32bit float space
I'm having some trouble wrapping my brain around the concept of float space. As an experiment, I rendered out a sequence (video originated from a DVCPRO50 timeline) with some text and graphics in both 32-bit and 8-bit float space. I opened both renders in QuickTime but found a negligible difference in quality... actually, no difference.
Should rendering a 32-bit vs. an 8-bit composite improve the quality of my final render, or will Motion render at full quality regardless, even if my graphics are composited in 8-bit?
When is it beneficial to work in 32bit?
Also, I assume that if you send a DVCPRO50 sequence (the codec I most frequently work with) to Motion, it has no effect on the quality of the video? In other words, is aspect ratio the only important consideration?
I've been meaning to write up a tutorial/article that covers this, so now I have a chance to test out my explanation. If it comes across as a bunch of squawking noises, please bear with me ;)
There can be a number of reasons to do your compositing in 16- or 32-bit float. Most of them, though, don't apply to video work.
First is film: film work is typically done on 10-bit log Cineon files that are converted into 16- or 32-bit linear space, or on OpenEXR files, which are already 32-bit. The extra headroom (over 8-bit) allows for a greater range of values, which film is capable of capturing. This includes superwhites and superblacks (think explosions, sun reflections, etc.). Motion allows for the import and export of OpenEXR sequences, so it would be possible to do film-level compositing in Motion, or to generate 32-bit elements for compositing in other apps. A little bird told me that ILM used Motion to generate some of the smoke elements for the dragon-fight sequence in the last Harry Potter, for example. If your media starts out as video footage, though, it almost certainly has no information (to start with) that requires 16/32-bit headroom.
There are times, however, when you might perform certain operations in a project using 8-bit media that would benefit from the increased accuracy of float processing. Without getting too far into the compositing math: compositing and image operations require that elements get premultiplied and un-premultiplied over and over. Multiplying and dividing, again and again. In 8-bit, each of the R, G, B, and A channels has only 256 integer levels (0-255). So in the course of those operations, rounding occurs and can lead to errors, like banding. Try this in Motion:
1. Create a new, empty project.
2. Add a Clouds generator to the Canvas.
3. Set the Opacity of the generator to 10%.
4. Duplicate the generator three times (Cmd+D).
See the posterization that has occurred? This is an extreme example, brought on by comping low-opacity "soft" images together, but you can see the effects of the rounding errors. Now, take that same project and shift it into 16-bit. You'll see a much better result. Moving into float space allows for fractional values: black is 0 and white (on your screen) is 1.0. With the extra precision, a lot more accuracy is possible. Float also allows for values greater than 1, those superwhites I mentioned earlier. In 8-bit, all values above 255 (white) are clamped, but float allows for much higher values. Again, though, it depends on your destination medium. If you are shooting back out to film, which holds a much larger range of values than video, having those float values is essential. But since TVs and computer monitors are 8-bit (for now), you typically don't need to worry about float output.
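If you'd like to see the rounding problem outside of Motion, here's a toy sketch in plain Python (my own illustration, not anything from Motion's actual pipeline) of how integer math collapses neighboring gray levels while float math keeps them distinct, plus the superwhite clamping I mentioned:

```python
# Toy illustration of 8-bit integer rounding vs. float precision.

# Ten distinct neighboring gray levels, dimmed to ~10% opacity:
gradient = list(range(100, 110))
dimmed_8bit  = [(v * 26) // 255 for v in gradient]   # 26/255 ~= 10%; integer multiply truncates
dimmed_float = [v / 255 * 0.1 for v in gradient]     # fractional values survive

print(dimmed_8bit)
# -> [10, 10, 10, 10, 10, 10, 10, 10, 11, 11]
# Ten distinct input levels collapse to just two output levels:
# that flattening, spread across an image, is the banding/posterization.

print(len(set(dimmed_8bit)), len(set(dimmed_float)))
# -> 2 10

# Superwhites: 8-bit clamps anything above white, float keeps the headroom.
hot_pixel = 1.5                         # 150% of white, e.g. a sun reflection
print(min(int(hot_pixel * 255), 255))   # 8-bit: clamped to 255, detail gone
print(hot_pixel)                        # float: value preserved for later operations
```

The point isn't the exact numbers; it's that every integer multiply throws away fractions, and after a few stacked operations those losses become visible.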
Another issue is speed and memory footprint. Motion does all its image processing on the GPU, so if you have a 128MB card, it has 128MB of space for processing your 8-bit images. But if you're working in 32-bit float, which requires four times as much memory space, you now have 32MB (one-fourth of 128) of space on your GPU. Because of the increased overhead, processing in float can be quite slow. If you have an Nvidia GPU, though, 16-bit processing is accelerated, so it's not nearly as brutal, speedwise, as 32.
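To put rough numbers on that memory footprint, here's a back-of-the-envelope sketch (the 720x480 NTSC SD frame size is just my example; the four-times ratio is what matters):

```python
# Per-frame memory for 8-bit vs. 32-bit float images, assuming RGBA.
width, height = 720, 480   # NTSC SD frame, chosen only for illustration
channels = 4               # R, G, B, A

bytes_8bit  = width * height * channels * 1   # 1 byte per channel
bytes_float = width * height * channels * 4   # 4 bytes per channel

print(f"8-bit: {bytes_8bit / 2**20:.2f} MB per frame")
print(f"float: {bytes_float / 2**20:.2f} MB per frame")
# 32-bit float takes exactly four times the space, which is why a
# 128MB card effectively behaves like a 32MB card for float work.
```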
The long and short of it: mathematically, things will be more accurate if you do your work in 16- or 32-bit. Your work will also go slower and hit the limits of your GPU sooner. If you're working on video, it will likely make no difference in your results, but as shown in the above example, it CAN make a difference in very specific situations.
Wow, cool stuff. Thank you for such a detailed explanation. However, I do have a couple of follow-up questions. First of all, if I perform an operation like the example you mention above in 32-bit float and drop that composite back into my DVCPRO50 sequence in FCP, should there be a noticeable difference?
The majority of my work consists of text sequencing and building custom lower thirds. I guess the question I have is, regardless of my video compression, is 8-bit gonna be my best bet? Even if I am layering graphics in a 10-bit uncompressed timeline? Thanks again.
Good questions. As far as I can tell, Motion projects that are processed using the QuickTime component (i.e., bringing a Motion project into FCP as a media file) only ever get processed in 8-bit. I think this is probably a limitation of QuickTime, which doesn't natively support float. To get the benefit of float processing, you'd need to export your projects from Motion (not Compressor) and then use those movies in FCP et al. The renders are 8-bit movies, but the math to get those results will have been done in float.
For the type of work you're describing, though, I think 8-bit will do the trick. If you ever run into significant banding problems or the like, then you can always try shifting the project into float and see how it looks. Unfortunately, by the time your work gets to a viewer, it's usually been compressed by broadcasters or cable/satellite operators so much that it doesn't matter :(