I'm in the process of building a test set for my final year project at uni.
The whole thing is set in a lift, so I wanted to build the set by modelling a polygon cube and reversing its normals, so the cube is inside out.
My scene will include a fair bit of lighting, and at the time of writing I'm looking at about 3 or 4 cameras set up in different locations in the lift.
This is where yet another n00b question comes into play.
When rendering from a camera, will the renderer process what's not visible to the camera's lens? For example, my lift will have a fairly complex button panel, but it won't be on show in many of the shots. When rendering from a camera that can't see it, will the renderer still calculate its textures and lighting? If so, is there a way to turn this off when it's not in the render view?
My second question concerns the reversed poly cube that I want to fashion into my set: if I apply global illumination to my scene, will it show up inside the poly cube?
Generally you don't need to worry about off-camera detail. Renderers will only actually render what is visible in the frame. The off-camera objects will take up a bit of RAM and so on during the render, but that's probably negligible unless your scene is incredibly complex. In any case, if you're using advanced techniques such as raytracing and GI, you might not want to hide those elements, since they can contribute to the scene even when they're not directly in view (e.g. they might be visible in a reflection somewhere, or might bounce light into the scene from out of frame).
As for the GI: a completely enclosed area won't receive illumination from the background environment (of course), but GI can still be used as a lighting method if you apply the correct settings to your lights and surfaces.
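On the inside-out cube itself: the trick works because a polygon face's normal follows its vertex winding order, so reversing the order of the vertices flips which side the face "shows". A minimal NumPy sketch of that idea (not tied to any particular 3D package — just the right-hand-rule maths every renderer relies on):

```python
import numpy as np

def face_normal(a, b, c):
    """Unit normal of triangle (a, b, c), direction given by winding order."""
    n = np.cross(np.asarray(b, float) - np.asarray(a, float),
                 np.asarray(c, float) - np.asarray(a, float))
    return n / np.linalg.norm(n)

# Counter-clockwise winding (seen from above): the normal points up (+Z).
up = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))    # -> [0, 0, 1]

# Reversing the winding flips the normal, so the face now "looks" down (-Z).
down = face_normal((0, 0, 0), (0, 1, 0), (1, 0, 0))  # -> [0, 0, -1]
```

So when you reverse the normals on the cube, every face points inward instead of outward, and the interior becomes the renderable, lit side — which is why GI bounced off the inner walls can still light the lift interior.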