Can't seem to get the shading samples override to work?
I've got some small-geometry aliasing problems - basically some railings on a boat that flash considerably when small in frame.
I'm using mental ray with raytracing. When I set the anti-aliasing samples to 2 (min sample level) and 4 (max sample level) in the Render Globals it stops the aliasing very well - but overall render time rockets.
I am trying to use the shading samples override in the Attribute Editor for each piece of small geometry, but when I compare renders there is no improvement at all. Is there something I'm missing?
The geometry anti-aliasing override doesn't seem to change anything either.
First off, I don't believe the object samples override and shading samples override are translated by mental ray at all, which is why Maya 7.0 added new mental ray-specific shading overrides on geometry. Look in the mental ray rollout in the object's Attribute Editor; you'll see min/max object sampling and some other overrides for Final Gather, GI, etc. Use these for mental ray, not the older Maya ones.
Also, min 2 / max 4 are EXTREMELY large and slow settings for anti-aliasing in any system. That pretty much means you are sampling every single pixel in your image a minimum of 16 times before any adaptivity happens, and a maximum of a whopping 256 times if neighboring pixels don't fall within a certain relative contrast before reaching level-4 sampling. All the adaptive sampling is based on a contrast threshold, much like Maya's native software renderer. Here is a decent explanation of what mental ray's adaptive sampling is doing:
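The per-pixel counts above follow from the rule that mental ray takes 4^level samples per pixel at a given sampling level. A quick sketch in plain Python (not Maya-specific) to check them:

```python
def samples_per_pixel(level):
    """mental ray takes 4**level samples per pixel at sampling level `level`."""
    return 4 ** level

# min 2 / max 4 from the Render Globals:
print(samples_per_pixel(2))  # 16 samples per pixel before any adaptivity
print(samples_per_pixel(4))  # 256 samples per pixel in the worst case
```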
A better approach is to keep min samples at, say, 0 or 1 (0 being a minimum of 1 sample per pixel, and 1 being a minimum of 4 samples per pixel, which is still a lot). You could keep your maximum at 4, but I recommend 3; usually you don't want your maximum to be more than 3 levels above your minimum. So if your min is 0, max should be no more than 3, otherwise it slows down even more. Then adjust your contrast R, G, B and A settings. Drop them down to 0.1, or 0.05. This forces mental ray to be far more picky when neighboring pixel values differ slightly in contrast, which in turn forces it to subdivide that pixel to the next level of sampling.
To illustrate the contrast controls, say you have your min and max set to 0 and 3 respectively. You also have your contrast R, G, B and A at 0.5 for all four (which is way too high a value). Since min samples is set to 0, mental ray will sample every single pixel once before ever consulting the contrast between neighbors (which is why a min of 2 is usually way too much; it takes the adaptive out of adaptive). Then mental ray compares the neighboring pixels' red, green, blue and alpha components separately. So it asks: pixel A and pixel B, what are your red values? Pixel A is 0.2, pixel B is 0.8. That difference is over the contrastR tolerance of 0.5, so it says "I must break you!" and subdivides to the next sampling level, taking 4 samples (roughly every corner of the pixel gets sampled). Mental ray will not automatically jump directly to the max level of 3 that is set; it is smart enough to only subdivide as much as necessary. So it divides the pixels, samples the sub-pixels, and then says: pixel A and B, let's try this again... what are your red contrasts now? Pixel A says 0.35, pixel B says 0.65. Mental ray is satisfied because that difference is below the contrastR tolerance of 0.5, so it stops sampling that pixel and moves on.
Now, if the contrastR tolerance were set to a much lower value, say 0.05, then mental ray would undoubtedly subdivide these pixels all the way up to the maximum sampling level of 3, because there is really no way to get a contrast difference under 0.05 when pixel A starts at 0.2 and pixel B starts at 0.8.
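The subdivide-until-satisfied loop described above can be sketched as a toy model (plain Python; the way sub-samples get averaged here is a made-up simplification, not mental ray's actual algorithm):

```python
def needs_subdivision(a, b, tolerance):
    """Compare one channel of two neighboring pixels against the contrast tolerance."""
    return abs(a - b) > tolerance

def refine(a, b, tolerance, level, max_level):
    """Keep subdividing until the contrast falls under tolerance or max_level is hit.
    Each round pulls both values toward their mean, mimicking sub-sample averaging."""
    while level < max_level and needs_subdivision(a, b, tolerance):
        level += 1
        mid = (a + b) / 2.0
        a, b = (a + mid) / 2.0, (b + mid) / 2.0  # toy stand-in for averaging sub-samples
    return level

# contrastR 0.5: one subdivision brings 0.2 / 0.8 to 0.35 / 0.65, which passes
print(refine(0.2, 0.8, tolerance=0.5, level=0, max_level=3))   # stops at level 1
# contrastR 0.05: the same pixel pair is driven all the way to the max level
print(refine(0.2, 0.8, tolerance=0.05, level=0, max_level=3))  # stops at level 3
```

Note how the looser tolerance stops after a single subdivision, exactly as in the 0.35 / 0.65 example above, while the tight tolerance burns through every available level.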
The filtering type and size play a large role in how these subdivided pixel samples get averaged back together. Many people mistake the multi-pixel filtering for just a post process; it is not. When you choose Gaussian 3.0 x 3.0, you aren't picking a filter in Photoshop that simply blurs the image. You are actually picking the method for blending all these tens of samples per pixel together with a weighted average, from the center of each pixel out over a radius of neighboring pixels. The larger the filter size, the farther across pixels the sample blending happens.
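That weighted average can be illustrated with a rough 1-D sketch (the falloff shape and width-to-spread mapping here are assumptions for illustration, not mental ray's actual filter kernel):

```python
import math

def gaussian_weight(distance, width):
    """Weight falls off with distance from the pixel center; `width` sets the spread."""
    sigma = width / 3.0  # assumed mapping from filter width to falloff, for illustration
    return math.exp(-(distance ** 2) / (2.0 * sigma ** 2))

def filtered_value(samples, width):
    """samples: list of (distance_from_pixel_center, value) pairs.
    Samples within the filter radius contribute by their Gaussian weight."""
    pairs = [(gaussian_weight(d, width), v) for d, v in samples if d <= width / 2.0]
    total = sum(w for w, _ in pairs)
    return sum(w * v for w, v in pairs) / total

# samples near the pixel center dominate, but a bright sample 1.2 pixels
# away still pulls the result up - that's the cross-pixel blending
samples = [(0.0, 0.2), (0.4, 0.3), (1.2, 1.0)]
print(filtered_value(samples, width=3.0))
```

With a wider filter, more distant samples pass the radius check and carry more weight, which is why larger filter sizes soften the image further.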
Just to shock you: if you're rendering an image at the standard broadcast resolution of 720x486 with a min and max of 2 and 4, you will sample that image a minimum of 5,598,720 times just for anti-aliasing (and each sample shoots rays too, because you're probably raytracing). That's over 5 million samples for the whole image... and that's *before* mental ray even checks whether the pixel neighbors meet the contrast tolerance. If none of them do, the worst-case scenario is about 90 million samples for the image, which is admittedly improbable. Compare that to sampling each pixel with a min/max of 0 and 3: you'll sample the image a minimum of 349,920 times, with an improbable maximum of 22,394,880. An order of magnitude less work.
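The arithmetic behind those figures is just pixel count times 4^level:

```python
width, height = 720, 486          # standard broadcast resolution
pixels = width * height           # 349,920 pixels per frame

def image_samples(level):
    """Total anti-aliasing samples for the whole frame at a given sampling level."""
    return pixels * 4 ** level

print(image_samples(2))  # minimum with min level 2  -> 5,598,720
print(image_samples(4))  # worst case with max level 4 -> 89,579,520 (~90 million)
print(image_samples(0))  # minimum with min level 0  -> 349,920
print(image_samples(3))  # worst case with max level 3 -> 22,394,880
```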
Hope that helps a lot of people :)
Thanks a lot for the time put into that answer. Much appreciated.