Questions for Rafael
Could you please explain what "Render all YUV material in High Precision YUV" does to improve the look of your SD renders?
Also, I wanted to compliment you on the beautiful video on your website.
It has a very nice, almost 3-D look!
How are you achieving the vignette you use?
2.5GHz Quad-core PowerPC G5
Final Cut Studio 2
Rafael is in Bangkok at the moment getting the firmware upgraded on his EX1. He should be back home in Laos shortly.
I'm obviously not Rafael, but if it helps I'm glad to offer my understanding of this setting. Basically, the "Render all YUV material in high-precision YUV" checkbox switches FCP's effects rendering engine to 32-bit floating-point math. In situations where you're applying multiple filters to a clip, and/or 10-bit-aware filters such as color correction, the greater precision will improve the quality of the final render file even though the original clip may carry only 8 bits of color information. Note (a) that not all filters (in fact only a very few) are actually 10-bit capable, and (b) that your render times will increase significantly with this setting. However, as a final step when mastering, especially in situations such as those described above and when rendering for a 10-bit output, as Raf describes, you may find the improvement in your output makes it worthwhile.

It's probably worth noting, however, that the general consensus is that this setting applies solely to the quality of effects rendering (as noted, it changes the precision of the effects rendering engine) and therefore would not be essential for processing SD downconversions. The quality of scaling, among other things, is generally understood to be dictated by the "Motion Filtering Quality" setting (also in the Video Processing tab), which should be set to Best for final output (something Rafael also advocates).
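A toy sketch in plain Python (my own illustration, not FCP's actual engine) of why intermediate quantization hurts when chaining filters: two gain operations that should cancel out are applied once with an 8-bit re-quantize after every step, and once in floating point with a single quantize at the end. The gain values and sample points are invented for the demo.

```python
# Toy model: chain "darken by 1/3" then "brighten by 3" over 8-bit code values.

def gain_8bit(value, g):
    """Apply gain and immediately re-quantize to 8-bit (0-255), like a
    low-precision render pass between every filter."""
    return max(0, min(255, round(value * g)))

def gain_float(value, g):
    """Apply gain in floating point with no intermediate quantization."""
    return value * g

src = list(range(0, 256, 5))  # a sample of 8-bit code values

# 8-bit pipeline: quantize after each filter in the chain
eight_bit = [gain_8bit(gain_8bit(v, 1 / 3), 3.0) for v in src]
# Float pipeline: quantize only once, at the very end
floating = [max(0, min(255, round(gain_float(gain_float(v, 1 / 3), 3.0))))
            for v in src]

lost_8 = sum(1 for a, v in zip(eight_bit, src) if a != v)
lost_f = sum(1 for b, v in zip(floating, src) if b != v)
print(f"8-bit chain: {lost_8} of {len(src)} samples drifted from the original")
print(f"float chain: {lost_f} of {len(src)} samples drifted from the original")
```

The float chain recovers every sample; the 8-bit chain drifts on most of them, which is the accumulation Andy describes when stacking filters.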
Rafael's contention appears to be that the render-in-high-precision setting in and of itself applies much more broadly than simply the effects rendering engine, and that the setting is an essential one when handling scaling in FCP too. I'm not especially privy to the inner workings of FCP's processing pipeline, and despite what might have been generally understood before, it could be that Rafael is quite correct ... but either way, if the specific settings and workflow he's hit upon can produce markedly better results than when not applying that particular recipe, then it seems to me it's probably worth the effort!
Hope that's a useful perspective to add to the pot.
Regarding vignettes ... you can create one yourself by overlaying a simple matte shape, or use FCP's built-in Vignette filter (in the Video Filters > Stylize bin). You can also download a free Vignette filter from Marcus Herrick's website, there's another free Vignette filter at Alex Gollner's website, and you can even use my Region Blur filter for it. I dare say there are many more besides.
Thanks Andy...nice response!
Ohhh!!! What beautiful girls in Bangkok!!! ;-)
Well, in FC there are too many things that we have to guess at.

Basically I think the same as Andy. This is why, when talking about resizing, I recommend not only "High Precision Rendering" but also setting "Motion Filtering Quality" to Best.

My guess is that when you trigger the 32-bit FP rendering, this also affects the "Motion Effects". Otherwise it wouldn't make sense: if you render your filters in 10-bit but the crop, resize, or whatever other transformation comes after is done in 8-bit, we are going nowhere. We end up with 8-bit information written as 10-bit.

One of the problems, as Andy points out, is that we don't really know which effects are written to work in 10-bit, but my understanding is that all the "Motion Effects" are. Otherwise your Digibeta production would be crunched down to 8-bit during motion rendering.

To see how those settings really affect picture quality when rendering Motion Effects, a more or less serious test would be needed: exporting the same clip with different settings and comparing the pictures.

Many people think that rendering filters in 10-bit (on 8-bit footage) is only useful when rendering multiple filters. Look at the difference when rendering a single filter, and when no rendering is needed:
- Drop a clip of DV footage into a DV sequence.
- Drop the Nattress "Chroma..." filter on it.
- Zoom into a blocky area of the picture (or better, a red saturated object spreading its chroma beyond its edges).
- Toggle the filter ON/OFF a few times.

Well, as you can see, the filter does NOTHING.

- Change the sequence codec to 8-bit.
- Toggle the filter ON/OFF.

Great, now it works. There is a big difference.

Now just do the same in 10-bit (with High Precision forced): Wow!!

With that and a good CC, your picture doesn't look like DV anymore.
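To put some toy numbers behind the single-filter claim (a plain-Python sketch of my own, not the Nattress filter or FCP's engine): a lone smoothing pass over blocky 8-bit chroma already produces fractional in-between values, and whether they survive into a 10-bit output depends on the precision of the render pass. The filter and the sample values below are invented for the demo.

```python
# Toy model: blocky 4:1:1-style chroma row smoothed with a 3-tap average,
# then delivered to a 10-bit output (0-1020 scale) via two different paths.

def box3(row):
    """Simple 3-tap box filter with edge clamping."""
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3
            for i in range(len(row))]

# 4:1:1-style chroma: one sample held across 4 pixels -> a hard step edge
chroma_8bit = [100] * 4 + [140] * 4

smoothed = box3(chroma_8bit)  # fractional gradient values appear here

# Path A: stay in high precision, quantize once to 10-bit at output
out_float = [round(x * 4) for x in smoothed]
# Path B: low-precision render re-quantizes to 8-bit first, then scales up
out_8bit = [round(x) * 4 for x in smoothed]

print("10-bit output, float path:", out_float)
print("10-bit output, 8-bit path:", out_8bit)
```

Around the step edge the two paths land on different 10-bit codes, which is the kind of difference the ON/OFF test above makes visible.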
Sorry if I wrote too much and have been too repetitive. Yesterday, when I came back to Vientiane, I posted a few wrong answers.

BTW, I will report on my first experience with the MxR adaptor (got the adaptor, no cards yet) in the corresponding thread.
Only the filters below will work in 32-bit floating point if your sequence is set for high-precision rendering in the Video Processing tab of the Sequence Settings window ... none of the others!
Color Corrector 3-way
... and the Cross Dissolve is the only transition
A very short list. And a list to keep.
Third-party plugins are the only solution.
And what about the Generators, Andy, any idea?
BTW, you know the insides of FC (better, I think, than anybody else in the FC forum). I guess that the developers must have enough information about how FC manages all these matters. Is this information accessible? And even if it is accessible, is it too hard to understand for a simple FC user?
I guess the basic thing that we can understand from the FxPlug SDK and/or the FxBuilder environment is that all scaling and motion is handled prior to effects handling (unless of course the scaling or motion is a product of the filter itself).
A filter is simply handed the image data upon which it is required to act, and that data is received at the resolution of the sequence ... in fact a filter has no direct means of even knowing what the original format or specifications of the source image were; it has only the image data it receives after the fact. (That's why some filters, for example the Nattress Standards Conversion filters, require that you drop the original master clip into the applied filter's image well: to pass it the unscaled, native source so that it can better handle the required processing independently.)

From such a perspective, even without definitive information or inside knowledge, it's reasonably safe to assume that FCP's data processing pipeline works like this: image data is initially processed according to a clip's initial bit depth and resolution, the sequence bit depth and resolution, and of course the sequence's motion filtering quality setting, and only thereafter is the image data processed according to the coding of a given filter and the sequence's high-precision setting.
I guess if the hypothesis is correct then it ought to be testable: take a raw clip (no filters) and drop it into an uncompressed 10-bit timeline; set motion filtering quality to best, turn on high-precision rendering, then export. Rinse and repeat but without the high-precision rendering setting, and compare the two exported files ... If there is no difference then it would seem to confirm that high-precision rendering has no effect on the quality of scaling alone, and further tests might show that the efficacy of the switch is only apparent if and where appropriate filters or combinations of filters are applied.
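The comparison step of that test could even be scripted. A minimal sketch, assuming the two exports are uncompressed so a byte-level comparison is meaningful; the file names are placeholders, and the demo below stands in two tiny files for the real exports:

```python
# Compare two renders byte-for-byte via their SHA-256 digests.
import hashlib
import os
import tempfile

def file_digest(path, chunk=1 << 20):
    """SHA-256 of a file, read in chunks so large exports are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

# Demo with two stand-in "exports" (in practice, point these paths at the
# movies exported with high-precision rendering on and off).
with tempfile.TemporaryDirectory() as d:
    on_path = os.path.join(d, "highprec_on.raw")
    off_path = os.path.join(d, "highprec_off.raw")
    with open(on_path, "wb") as f:
        f.write(bytes([10, 20, 30, 40]))
    with open(off_path, "wb") as f:
        f.write(bytes([10, 20, 31, 40]))  # one code value differs
    same = file_digest(on_path) == file_digest(off_path)
    print("identical" if same else "the renders differ")
```

Identical digests would support the "effects only" reading; differing digests would support Rafael's broader one.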
Well, I might have a look at that when I get some extra time (ha!) ... but in the meantime, whatever my tea-leaf reading may or may not conclude, Apple themselves state in the manual that “In most cases, you should choose to render your sequence using 32-bit floating-point space (called high-precision YUV) for final rendering before output or export.” And indeed, "in most cases" how many of us turn pics around without any form of filter-based correction? (Hmmm, I suppose if all the correction is handled in Color ... but I'm risking going off on another tangent.)
Very interesting exposition.

[Andy Mees] "it's reasonably safe to assume that FCP's data processing pipeline is that image data is initially processed according to a clip's initial bit depth and resolution, the sequence bit depth and resolution and of course the sequence's motion filtering quality setting, and only thereafter is the image data processed according to the coding of a given filter and the sequence's high precision setting."
Yes, I think that is the pipeline. The moment you drop any clip into the timeline, it is "converted" to 4:4:4.

How this stuff is processed will then depend on:
- The original bit depth of the footage: 10-bit material is ALWAYS processed in 32-bit FP.
- The "High Precision..." setting: this will quantize the 8-bit values as 10-bit and process them in 32-bit FP.
- The bit depth supported by the filters in the pipeline: if into a 32-bit FP process you introduce an 8-bit element, you go back to 8-bit written as 10-bit.

The Motion Effects would be applied afterwards, with three different filtering qualities and the two bit depths of processing, so in fact we have six different rendering options.

Then comes the compression phase: the 4:4:4 picture, with all the filters and Motion Effects applied, is compressed to the selected codec at the codec's bit depth. I don't think the sequence codec and bit depth have any relevance until this moment.
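That "8-bit written as 10-bit" point can be shown with toy numbers (a plain-Python sketch of my own, not FCP's actual math): promoting an 8-bit code value to a 10-bit scale adds no precision, and an 8-bit-only element in the middle of a high-precision chain throws any extra precision away again.

```python
# Toy model of bit-depth promotion and an 8-bit element in a 10-bit pipeline.

def to_10bit(v8):
    """Promote an 8-bit code value to a 10-bit scale (0-255 -> 0-1020).
    Note: this only multiplies; it cannot invent in-between values."""
    return v8 * 4

def filter_8bit(v10):
    """A filter that internally works at 8 bits: it quantizes the 10-bit
    value down to 8 bits, processes nothing, and scales back up."""
    return to_10bit(round(v10 / 4))

high_precision = 100 * 4 + 2  # a genuine 10-bit value between 8-bit codes
after_8bit_filter = filter_8bit(high_precision)
print(after_8bit_filter)  # snaps back to a multiple of 4: 8-bit data in 10-bit clothing
```

So a chain is only as precise as its least precise element, which is Rafael's argument for why the Motion Effects must also run in high precision for the setting to pay off.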
It has often been said that when you send something to Compressor, you are sending the picture right before the compression step. This could be easily checked: just send a DV clip, with the same render settings and filters, from both a DV sequence and a 10-bit Uncompressed sequence, and have the same job done in Compressor. If that is true, out of Compressor we should get the same picture, pixel by pixel.

This comparison could easily be made with Solorio's "Whites Count" test.
[Andy Mees] "And indeed, "in most cases" how many of us turn pics around without any form of filter based correction?"
Here I don't follow you. Are you talking about the limitations of the "rotate" tool in the Motion tab? I keep recommending that people not use it. Use the FC 3D filter instead, or better, Andy's Better 3D. That's the one I use whenever I forget to set the tripod properly ;-)
Some homework for you, Andy. Something perfect for a Sunday evening during the rainy season. We can start in a few weeks.
There's a great 3-part article by Kevin P. McAuliffe called "How To Get The Most Out Of Your Easy Setups." I've been waiting weeks and weeks for permission from the author to quote him for purposes of clarification on this very topic, but my request has been ignored. Given the lack of a response, I'm now going to assume that it's okay to post the quote as long as credit is paid to the author (which I have now done).
In Part 2 of Mr. McAuliffe's 3-part series he states the following:
"RENDER ALL YUV MATERIAL IN HIGH-PRECISION YUV
This is pretty self explanatory, as it makes sure you cover all your bases. Any YUV material will be rendered with 32-bit float. Again, longer renders with better results.
One thing that is important to keep in mind is that 32-bit float (or high-precision YUV) will not make your actual footage look any better. You need to talk to your DOP or graphic artist about that. All it does is give you better quality on your effects work."
Anyone who wants the link to the actual article can email me offline.