Not seeing 10 Bit precision into M100 HD from AE
I've had a nagging concern about rendering from AE at 16 (and now 32) bit to an intermediate, or directly into M100HD's native codec (which is 10 bit), and whether 10-bit precision is maintained into the HD system regardless of which render codec is used. This came up because grads and 3D lighting rendered in AE still look fairly banded to me. I'm sure I'm missing something, but it seems that the 16 bit precision is not being maintained all the way into M100HD. For this test I rendered to both the Media 100 HD codec at 10 bit and to the Microcosm codec from Digital Anarchy, using the following steps:
-made a 1024x10 pixel comp, 2 seconds long, 29.97 fps
-made a new solid, comp size
-Apply the Ramp filter (16bpc capable), pure black to pure white, from 0,0 to 1024,0
-project set to 16 bpc
-rendered out to the codec of choice, being sure to select Trillions in the Output Module.
-reimported the rendered file, set the Info palette units to a 10 bit scale (0-1023), and slowly moved the cursor over the rendered file, checking that for each horizontal pixel I pass the cursor over, moving from left to right, the color value increments by one.
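The cursor test above can be sketched in code. This is a minimal illustration (not AE or Media 100 code, and the function names are invented for the example): an ideal 1024-pixel ramp carries one code value per pixel on a 10-bit scale, so neighboring pixels differ by 1. If any stage in the chain truncates to 8 bits, re-reading the file on a 10-bit scale shows neighboring pixels stepping by 0 or 4 instead.

```python
# Sketch of the ramp/cursor test described above (illustrative only).

def ramp_10bit(width=1024):
    """Ideal pure-black-to-pure-white ramp, one 10-bit value per pixel."""
    return [round(x * 1023 / (width - 1)) for x in range(width)]

def through_8bit(values_10bit):
    """Simulate a codec path that preserves only 8 bits of precision:
    drop the two least-significant bits, then rescale back to 10-bit."""
    return [(v >> 2) << 2 for v in values_10bit]

def step_sizes(values):
    """Distinct increments between neighboring pixels (what the cursor
    test observes as it moves left to right)."""
    return sorted({b - a for a, b in zip(values, values[1:])})

full = ramp_10bit()
print(step_sizes(full))               # [1] -> 10-bit precision preserved
print(step_sizes(through_8bit(full))) # [0, 4] -> precision lost to 8 bits
```

This matches the symptom in the results below: a codec path that keeps all 10 bits shows values incrementing by 1, while an 8-bit path shows them incrementing by 4 (with flat runs in between), which is exactly what reads as banding in grads.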
Results show that the M100HD codec is losing precision, with the values incrementing by 4. The Microcosm values increment by 1, maintaining the 16 bpc information. If I then take the Microcosm render and import it into M100 HD at 10 bit, the resulting rendered file, evaluated the same way, loses the 16 bpc information as well.
So, does this seem odd to anyone? Am I missing something?
Without knowing your exact end-game, I can't recommend a proper workflow for you. That said, here is what I believe you're seeing. When you render your comp out to the Media 100 HD codec, the resulting file is going to be 10-bit YUV. When you then re-import that file into AE, you are seeing an 8-bit RGB image. Here is why...
The way the Media 100 HD codec is written allows applications to specify what mode the codec needs to be in (we call these interfaces). There is a YUV mode and an RGB mode. Since AE is an RGB application, your 10-bit YUV image is being used in 8-bit RGB mode (the 8-bit RGB interface) within AE. The actual source file is still 10-bit YUV, but it is being displayed as 8-bit RGB.
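The "interfaces" idea described above can be sketched as follows. This is a hypothetical illustration only (the class and method names are invented, not the actual Media 100 SDK, and the real 8-bit RGB path also involves a YUV-to-RGB color matrix that is omitted here): the codec stores 10-bit samples, but hands each client whichever pixel format that client asks for.

```python
# Hypothetical sketch of a codec exposing two decode interfaces.

class HypotheticalCodec:
    def __init__(self, samples_10bit):
        self.samples_10bit = samples_10bit  # stored data stays 10-bit YUV

    def decode_yuv10(self):
        # 10-bit YUV interface: full stored precision for YUV-aware apps
        return list(self.samples_10bit)

    def decode_rgb8(self):
        # 8-bit RGB interface: stored 10-bit samples are scaled down to
        # 8 bits for RGB-only clients such as AE (color matrix omitted)
        return [v >> 2 for v in self.samples_10bit]

codec = HypotheticalCodec([0, 1, 2, 3, 4, 5, 6, 7])
print(codec.decode_yuv10())  # [0, 1, 2, 3, 4, 5, 6, 7]
print(codec.decode_rgb8())   # [0, 0, 0, 0, 1, 1, 1, 1]
```

The point is that the file on disk never loses its 10 bits; it is only the view AE requests that is 8-bit, which is why values inspected inside AE step in groups of four on a 10-bit scale.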
You could always experiment with different codecs to get a better result. Other codecs may expose you to different YUV/RGB interfaces and give the proper results. You are never required to work in the Media 100 HD codec within our application so feel free to run some tests.
I hope this helps you.
Thanks for the reply. A little background on what we're doing with M100HD/AE: we composite spots in AE, with source material coming in part from M100. The final step is then going back to M100 for output.
For speed we have been rendering directly to the HD codec, so that the import into the HD system consists of a header re-write only rather than re-rendering the file into the HD codec. My understanding had been that if we rendered to another intermediate codec we'd end up in the HD codec eventually anyway, but I suppose I could render out to Apple uncompressed or another natively supported codec.
We never pre-render anything we're using within AE to the HD codec, so having AE truncate it to 8 bpc is not a problem; I was just trying to evaluate the issue by observing the rendered output from the HD codec.
We were an early HD system purchaser, so we are running a G5 PCI-X system with 10.1.4. Are there any software-related issues I should know about that might be playing a factor in this?
Thanks again for discussing this issue with me.