questions about color correction (conversion)
Hi, I'm relatively new to color correction and there are some things confusing me. I'd like to get some help from those of you who might be able to clear them up for me.
I've recently been shooting with the a7S and there are some things I don't quite understand very well, like the conversion of footage to Rec.709, and LUTs.
So, let's say I shoot in S-Log2 with a properly exposed image; I should have some 12-14 stops of dynamic range to work with. As I understand it, my footage straight out of the camera should have 12-14 stops of DR. I then color correct/grade it in Colorista/DaVinci (without any LUT applied) and export as an h.264 file for YouTube/Vimeo. But as I understand it, YouTube/Vimeo only display about 6 stops of DR.
So when exactly did I convert the 12-14 stops into the 6 stops of Rec.709? How do I know I properly fit the 12 stops into the 6 stops for display on the internet and TV? How can I be sure I did it correctly to make maximum use of the 12-14 stops?
And regarding LUTs: as far as I understand them, they're just settings applied to the footage, such as contrast, saturation, sharpening, etc. Is that correct?
So LUTs are sort of presets applied to the footage that give you a first color correction, which you then grade as wanted?
If I'm grading the footage to my own taste of what looks good, do I need to apply LUTs? Why are they recommended?
The main question, basically: how do I know I properly converted the 12-14 stops into the 6 stops of Rec.709?
Thanks in advance.
[Willian Praniski] "The main question, basically: how do I know I properly converted the 12-14 stops into the 6 stops of Rec.709?"
It seems you have a bit of a misunderstanding about dynamic range in this context.
The 12-14 stops of dynamic range you are referring to for RAW files relates to how much latitude you have to color grade that material as it comes straight from the camera; it does not refer to the dynamic range of material for web viewing on YouTube.
If perfectly exposed, the RAW files straight from the camera will have the maximum dynamic range possible, affording you the widest range of exposure latitude and color control during grading. However, any time you transcode or encode from the RAW files to a traditional video codec (and that includes exporting from Resolve or an NLE after color grading), you will lose some (or a lot) of both dynamic range and color information, depending upon the non-RAW codec you choose. You will therefore lose some (or a lot) of your control over color and exposure during the grading process.
For example, if you transcode or encode from RAW to a high-performance codec such as Apple ProRes 422 or Avid DNxHD 185 before grading, you will lose some of the dynamic range and color information recorded in the RAW file, even though these codecs are visually nearly lossless. Your ability to color grade will still be quite robust, but you will not have the full 12-14 stops of exposure latitude you'd have with RAW files. That does not mean the video will look bad; it just means you'll have less digital information to work with, and thus not quite the same range of exposure or color control you would have if you graded the RAW files.
With regard to encoding to h.264 for YouTube, this entire discussion really does not apply, as you would not be using those files for additional color grading once they're encoded for YouTube. Right? When encoding for YouTube, preserving resolution and proper color and gamma are normally the primary concerns, not dynamic range: the goal is to provide a smooth, optimal viewing experience for your audience on their computers, phones, etc., not to provide them with material for further editing.
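To put rough numbers on the stops question: each stop is a doubling of light, so N stops of dynamic range corresponds to a 2^N : 1 contrast ratio, and a log curve like S-Log2 is what squeezes that wide scene range into a normal 0-1 video signal. Here is a toy sketch (illustrative Python only; `toy_log_encode` is a made-up curve, not Sony's actual S-Log2 math):

```python
import math

def contrast_ratio(stops):
    """Each stop doubles the light: N stops covers a 2**N : 1 ratio."""
    return 2 ** stops

def toy_log_encode(linear, stops=14):
    """Made-up log curve: maps scene values in [2**-stops, 1] to [0, 1]."""
    floor = 2.0 ** -stops
    linear = max(linear, floor)
    return math.log2(linear / floor) / stops

print(contrast_ratio(14))      # 16384:1 scene contrast captured by the camera
print(contrast_ratio(6))       # 64:1, roughly what a Rec.709 display shows
print(toy_log_encode(1.0))     # brightest scene value lands at 1.0
print(toy_log_encode(2 ** -7)) # a mid-range value lands at 0.5
```

The "conversion" being asked about is exactly this remapping: the grade (or a LUT) decides which part of the wide scene range survives into the much narrower display range.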
If you are providing material for stock footage download, then tell us, because that has a few different nuances to discuss.
Does this all make sense?
David Roth Weiss
David Weiss Productions
David is a Creative COW contributing editor and a forum host of the Apple Final Cut Pro forum.
Hi David, firstly, thanks for your response.
Perhaps I wasn't clear enough, or maybe you didn't read my question the way I meant it. Let me try to explain this again.
Firstly, just to make it clear: the footage coming from my camera isn't really RAW like the files from RED, ALEXA, or Blackmagic cameras. It's an XAVC-S file wrapped in .MP4 (basically a higher-end version of h.264, as far as I know, because it's 50 Mbps instead of the usual 24 Mbps).
So, let's say I take the file straight out of the camera (in MP4) without transcoding it, color correct it, then grade it (the "raw" file).
Then I finally export as h.264 for YouTube or Vimeo. But as far as I'm aware, most TVs and the internet can only display about 6 stops of dynamic range (Rec.709).
So where in the process did I convert the initial 12-14 stops down to 6 stops? When I exported the graded files to h.264?
In any case, what is the best codec to export my graded footage to (without losing any dynamic range)?
I think you are still misunderstanding what the camera's dynamic range is for. It is not something where you look at your monitor and say, "Wow, look at the dynamic range on this footage, it's beautiful."
Dynamic range simply expresses how much information you have to work with during your colour process. It might allow you to take that overexposed sky and bring it back to a more pleasant sky, because what appeared to be clipped whites (or blacks) can be unclipped, or brought back from the dead. It doesn't mean you can ignore exposure during the shoot on the theory that "the colourist can fix it all because I have so much dynamic range," but it does give you some options.
As David said, this has no bearing on how it is viewed on YouTube.
Another simple answer to your question would be: as soon as you compress anything to h.264 and upload it to YouTube, you are losing all sorts of information. YouTube is not known for its high-quality, lossless compression standards.
As for LUTs, you are basically correct. They are tools that can be used to help convert the color space from one thing to another (S-Log to Rec.709, for example). You ask why people recommend using them; many people don't recommend using them. If you are doing a conversion for a specific type of playback, for a projection system perhaps, a LUT might be used to help keep the look consistent.
Many people feel they can shoot their footage, apply a LUT, and bam, colour is done, but it doesn't work that way. LUTs might get you to a good starting point, or they might get you to an awkward starting point that you have to colour before or after to get the picture looking how you want. It is all up to you.
Personally, I don't typically start with a LUT.
Others will always start with a LUT and tweak from there. Personal choice. I don't think anyone is going to look at your final project and say, "I'll bet you didn't use a LUT, did you?"
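For what it's worth, under the hood a LUT really is just that: a lookup table that stores an output level for each input level and interpolates in between. A minimal 1D sketch in Python (the five-entry `lut` here is a made-up contrast curve, not any real camera or conversion LUT):

```python
def apply_1d_lut(value, lut):
    """Look up a 0-1 value in a 1D LUT with linear interpolation."""
    value = min(max(value, 0.0), 1.0)
    pos = value * (len(lut) - 1)        # position between LUT entries
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# A made-up 5-point contrast curve: crush shadows, roll off highlights.
lut = [0.0, 0.18, 0.5, 0.82, 1.0]

print(apply_1d_lut(0.5, lut))    # midtones pass straight through: 0.5
print(apply_1d_lut(0.125, lut))  # shadows pulled down to 0.09
```

Real conversion LUTs (S-Log to Rec.709, say) are usually 3D cubes over RGB rather than a single curve, but the idea is the same: a fixed input-to-output mapping, which is exactly why a LUT by itself can't adapt shot to shot.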
Hmmmmmm, I believe I understand it now. Thanks for your patience in explaining it to me, guys. I've been doing a lot of research lately, but this whole new S-Log2 workflow from Sony has really gotten me confused about some things. However, I learned a lot of valuable new information that I should perhaps already have known.
And regarding LUTs, then: am I right in saying that a good colorist should be able to grade the image and maintain a consistent look without using any LUTs?
[Willian Praniski] "And regarding LUTs, then: am I right in saying that a good colorist should be able to grade the image and maintain a consistent look without using any LUTs?"
It's possible to do, but there's no harm in using actual color science to put the file in a good starting point. The problem with an actual LUT is that it can be destructive and get in your way, particularly if the material is shot poorly. Bear in mind this is still an 8-bit camera, and that alone is a big hurdle to get past.
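A quick way to see why 8-bit is such a hurdle: 8-bit stores the entire tonal range in 2^8 = 256 code values versus 1024 at 10-bit, so when a grade stretches a narrow slice of a flat log image there are only a handful of codes to pull apart, and that shows up as banding. A rough count in Python (illustrative numbers only):

```python
# Count the distinct code values available in a 10% tonal slice
# (0.50 .. 0.60 of full scale) at 8-bit vs 10-bit quantization.
samples = [0.5 + i / 10000 for i in range(1001)]
codes_8 = {round(x * 255) for x in samples}    # 8-bit code values
codes_10 = {round(x * 1023) for x in samples}  # 10-bit code values

print(len(codes_8))   # only 26 distinct steps describe that whole slice
print(len(codes_10))  # 103 steps at 10-bit: 4x finer gradation
```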
[Marc Wielage] "It's possible to do, but there's no harm in using actual color science to put the file in a good starting point. The problem with an actual LUT is that it can be destructive and get in your way, particularly if the material is shot poorly. Bear in mind this is still an 8-bit camera, and that alone is a big hurdle to get past."
I follow your logic here to an extent. I use LUTs all the time for a variety of things, but usually when speed is a priority over finer control of the image. It seems many people who have hopped on the A7S bandwagon are shooting S-Log3 and then dropping a Rec.709 LUT on their footage as the first step in grading, and in that case you're not gaining that much over shooting Rec.709, no? Rather than a generic Rec.709 LUT, why not start with a preset in your grading software (or a saved still in Resolve), so you can manipulate the settings from shot to shot rather than adding more adjustments after the LUT in the signal chain? I usually start with the look I created on set, the one I had previously used to create the LUT for on-set monitoring, but not the LUT itself, because the LUT can't be modified.
Someone earlier mentioned that using a LUT works as long as you have "properly exposed footage," but again, what is properly exposed when you're talking about log gamma curves? Properly exposed, to me, is a subjective choice based on the dynamic range of the scene and what you want the final image to look like. As such, a generic LUT would look terrible on some shots, for instance one where you exposed the skin tones a little lower than normal to keep a highlight from clipping.
Now, if you were planning on printing to film emulsion or something that's another story and a good reason to apply a LUT right off the bat so you know how that specific emulsion will impact your image.
"you're not gaining that much over shooting Rec709, no?"
Not necessarily the case. The key is to know when to apply your LUT inside Resolve.
If, for example, you were to use Resolve's color management and apply that Rec.709 LUT, you are given a clip converted to 709 but with all the latitude of the original raw shot. The correction does not clip as you move down the chain: you can still add a correction in node 1 and ease back on the highlights, removing the clipping (assuming the original shot wasn't totally overexposed). So in essence it gives you the best of both worlds.
What happens with a lot of people is they apply that generic LUT, see that the picture isn't perfect, and panic, and remove the LUT.
[Glenn Sakatch] "The correction does not clip as you move down the chain: you can still add a correction in node 1 and ease back on the highlights, removing the clipping (assuming the original shot wasn't totally overexposed)"
Good point. I guess I see the value in that workflow then, though I personally would still prefer to start from scratch and use the scopes and my monitor to grade for 709. As a DP who only does color grading on some of my own projects, I'm by no means an authority on the subject and would love to know more about pro colorists' workflows. I wish there were more resources available; the Lift Gamma Gain forum is pretty good, but I've found limited info about S-Log.
You seem to be insinuating that only non-professionals use LUTs and that anyone using them is slacking off on the technical side, but you are missing the point. You are still using your monitor and scopes to grade for 709; the implementation of a LUT does not change that. The point is that if you want to use these LUTs, you are not necessarily handcuffing yourself. You are simply getting the footage closer to a technically correct shot sooner.
Having said that, using a LUT at the wrong spot could handcuff the user, and cause clipping and loss of information.
If you want to use node 1 to get your contrast corrected and your saturation boosted manually, you are essentially doing the same thing the LUT is doing; you are just working your way through the process a bit more slowly.
They should not be looked at as a one-click-and-done solution, but in many (not all) workflows the use of LUTs can get you to a basic correction quicker than formulating your own starting point. You will still add a node and adjust to your liking, but this way the picture gets a head start before you touch it.
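The placement point above can be sketched with a toy example (Python; `clip_lut` is a made-up stand-in for a conversion LUT that clamps to legal range, not any real Rec.709 LUT):

```python
def clip_lut(x):
    """Stand-in conversion LUT: add contrast, clamp output to [0, 1]."""
    return min(max(x * 1.5, 0.0), 1.0)

def pull_highlights(x, gain=0.6):
    """A simple node-1 correction: ease the highlights down."""
    return x * gain

hot = [0.85, 0.95]  # two bright log values with real detail between them

# Correction in node 1, LUT applied after: the values stay distinct.
corrected_first = [clip_lut(pull_highlights(x)) for x in hot]

# LUT first, correction after: both values hit the clamp at 1.0,
# so they come out identical and the highlight detail is gone.
lut_first = [pull_highlights(clip_lut(x)) for x in hot]

print(corrected_first)  # two different values: detail preserved
print(lut_first)        # [0.6, 0.6]: detail lost before the correction
```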
Sometimes I use them, sometimes I don't. It kinda depends on the footage and the type of project I have.
[Rob Davis] "Now, if you were planning on printing to film emulsion or something that's another story and a good reason to apply a LUT right off the bat so you know how that specific emulsion will impact your image."
Actually, it has been possible for some time to drop in the LUT as the front end for the film recorder, so the final record-out interneg image is precisely set for film color space. It can be done, and in fact I've done it on a half-dozen films released on film print. We worked this way at ILM (to name just one facility).
It doesn't happen a lot these days, except for very wide-release studio pictures. For small indies and TV, they're never gonna hit film in 2017 except in very rare, unusual cases.
I get that some people cling to LUTs as a way to get to a starting place quickly, but I honestly think a PowerGrade can do just as much good. I don't have a problem working in the right color science (like RedColor or whatever), which is not the same as a LUT. DaVinci CTLs are another alternative that hold up well and are actually better than LUTs in a lot of ways, but they are not as widely used.
In the case of the Sony FS7/FS5 (not shooting RAW), would you generally do the final color grade from the native XAVC codec, or would you first convert to something like ProRes HQ, ungraded, and then do the color correction?
My Slog workflow is usually something like this:
1) Create LUT(s) on set for monitoring/video village while shooting
2) Import XAVC into Resolve, make basic adjustments using the LUTs from set for reference
3) Add aspect ratio blanking, anamorphic desqueeze, dual system audio syncing, etc.
4) Export h.264 dailies and ProRes LT (or proxy) for editorial
5) Final edit is done in NLE like Premiere
6) Online XAVC using XML from NLE back in Resolve
7) Final color grading
8) Final export
Given the opportunity to take your sequence back to the camera originals vs. a ProRes file, I would go back to the originals. Converting footage to an offline res for the edit, then relinking to the camera originals, then transcoding to ProRes HQ is simply an unnecessary step (and wastes space). Send your list to Resolve, tell Resolve to link to the camera originals, and start color.
Thanks for your reply. I assumed this was the case, but wasn't sure if converting to a less compressed codec (XAVC-L on the FS5 in particular uses a good deal of compression) would give you any more wiggle room in color. For instance, I feel like h.264 footage from Canon DSLRs grades a bit more easily once converted to ProRes, but with the much better 4:2:2 10-bit codec in the Sonys (at 1080), the difference is probably negligible.