Is there a fix for desaturation in H.264 encoding?
Hi! I'm new to this forum, and just getting into video work. I've noticed that projects turned into H.264 files with Adobe Media Encoder come out with colors desaturated. Is there a way to get around this?
Oversaturate your footage slightly before export, or compress to a different format. Desaturation is one of the drawbacks of H.264.
Hi, I'm really new to this, so I understand if no one wants to explain stuff to me. What is a "container" for compression, and how would I get a different one? Do you just mean the compression format? To upload to the web I need H.264. Is this a known issue with H.264?
If you compare a digital video file to video tape, they are pretty similar in design. I believe this is because each successive recording format is just trying to improve on its predecessor.
Film stored images across most of the film area. Audio was a linear track, optically exposed, that looked much like the waveforms you see today.
VHS and Beta were very similar in size and in how they worked. The differences were in the amount of information they stored and the way they wrote it. Basically, the tape format was a linear audio track written at the bottom of the tape, with the video written diagonally across the remaining space. The degree of rake and the tape speed determined how much information could be stored on the tape. More information meant better picture resolution, better color, and improved sound.
Today's digital video files operate in a very similar fashion to their predecessors. You can break a file down into base components like this:
Audio codec - Since the CD days we have used pretty much the same audio format for storage, called PCM or Pulse Code Modulation. Advancements in audio have produced steady gains in how much resolution we can store. CD audio was 44.1 kHz @ 16-bit, DVD came out with a standard of 48 kHz @ 16-bit, and audiophiles and music production pushed recording up to 96 kHz @ 24-bit for today's standards. Much like the tape formats above, the kHz figure is the tape speed, or resolution, and the bit depth is the amount of information you can store for each sample. So a 96 kHz recorder samples the audio 96 thousand times a second, and for each sample it notes 24 bits of level information.
Level information is easiest to describe as drawing with a pencil on paper: you are trying to represent a curve while only being allowed to draw stairs. The smaller you make your stairs, the closer your drawing comes to the original curve.
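To make the stairs idea concrete, here's a quick Python sketch (illustrative only, not how any real recorder is coded): it quantizes a sine wave at two bit depths and shows that more bits means smaller stairs and less error.

```python
import math

def quantize(sample, bits):
    """Snap a sample in [-1.0, 1.0] to the nearest of 2**bits levels,
    the 'stairs' a fixed bit depth forces onto a smooth curve."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)          # height of each stair
    return round(sample / step) * step

# One cycle of a sine wave sampled 48 times, quantized at two depths.
original = [math.sin(2 * math.pi * n / 48) for n in range(48)]
coarse = [quantize(s, 4) for s in original]    # 16 levels: big stairs
fine = [quantize(s, 16) for s in original]     # 65,536 levels: tiny stairs

err_coarse = max(abs(a - b) for a, b in zip(original, coarse))
err_fine = max(abs(a - b) for a, b in zip(original, fine))
print(err_coarse > err_fine)  # more bits -> smaller stairs -> less error
```

The same trade-off applies to video bit depth later in this post: the bits per sample set how fine the stairs are, and the sample rate sets how often you draw one.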
Compression is layered on top of all this. Compression like MP3, WMA, AAC, FLAC - the list continues - just reduces the amount of audio information that needs to be stored so we can fit more on digital recording devices. Formats like AC-3 take the same lossy approach but can carry more than two tracks. Dolby developed AC-3, and it became a standard in entertainment because it saves space on the DVD, leaving more room for video data. Other formats, like Windows Media, also had multichannel ability and arguably better sound quality, but Dolby likely underbid the royalty payments, or simply had better industry connections than a software (nerds) company like MS.
Video codec - Video works very much like recording multitrack audio. The picture is broken down into base components like red, green, and blue, then recorded using various "encoding" methods like RGB or YUV.
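Here's a small sketch of what a YUV encoding does, using the classic BT.601 analog weights (the exact constants vary by standard; these are just the textbook ones). The point is that Y carries the black-and-white picture while U and V carry only the color differences:

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0.0-1.0) to analog-style BT.601 YUV.
    Y is luma (brightness); U and V are color-difference signals."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # weighted brightness
    u = 0.492 * (b - y)                      # blue minus luma
    v = 0.877 * (r - y)                      # red minus luma
    return y, u, v

# Any gray (r == g == b) has zero U and V: all its information is in Y.
# That is why formats can skimp on color and still look acceptable.
y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
print(round(y, 6), round(u, 6), round(v, 6))
```

This separation is exactly why VHS (and every digital format since) could write "mostly B&W information with a little color information," as described below.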
VHS wrote mostly B&W information with a little color information. This trend continues because engineers know what matters most to the viewer, and recording standards can only store so much information. This balance is what divides pro gear from consumer gear. Pros want to store as much information as possible; consumers just want to see their kid at a birthday party and not pay too much to do it.
A codec is just a mathematical method of representing the recorded picture. As with digital audio recording and compression, the frame rate - 30p vs. 60p - is the frequency of notation, and the bitrate is the amount of information stored for each frame. Increase the frame size and the bitrate needs to grow. Again like audio recording, engineers use bit depth to their advantage, recording picture at 8 bits to save storage, and in some cases simply to work on low-cost, low-powered devices like an iPhone.
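You can see why the bitrate has to grow with frame size and depth just from the arithmetic. This sketch computes the raw, pre-compression bitrate (three color channels, no chroma subsampling assumed, so real cameras store less):

```python
def uncompressed_bitrate_mbps(width, height, fps, bits_per_channel, channels=3):
    """Raw (pre-compression) video bitrate in megabits per second."""
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame * fps / 1_000_000

hd_8bit = uncompressed_bitrate_mbps(1920, 1080, 30, 8)
uhd_10bit = uncompressed_bitrate_mbps(3840, 2160, 60, 10)
print(round(hd_8bit), round(uhd_10bit))  # roughly 1493 and 14930 Mb/s
```

Going from 1080p30 8-bit to UHD 60p 10-bit multiplies the raw data by ten. That gap is exactly what the codec's compression has to close, and why 4K pushed standards toward better compressors like HEVC.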
Video codecs can be interchanged inside the container, depending on what the user or business model requires. This can mean low-bitrate consumer codecs to keep storage and costs down, or professional levels that record as much information as possible.
Pro codecs include ProRes, DNxHD, DNxHR, Sony HDCAM/SStP, HDCAM 422, HDCAM-EX, Cineform, and the High profiles of H.264.
Today's 10- to 16-bit video formats are just storing more information, each in a custom-engineered method of recording and playing that information back. The greater the bit depth of your recorded image, the more refined sunset gradients appear and the richer your eye perceives the color. Of course, you need specialized display hardware to play these formats back, just as you need pro cameras to shoot them. RED, ARRI, and Blackmagic are examples of cameras that shoot in these formats. The digital projection at your local cinema is an example of a display device that operates at these levels. Your home UHD TV is more than likely operating at 8-bit levels, since this is the cheapest way to produce hardware. You have to be very discerning about equipment interfaces and display types to navigate the shell games hardware manufacturers play with marketing.
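A back-of-the-envelope way to see why 8-bit gradients band and 10-bit ones don't (simplified: ignores gamma and dithering, which real displays use to hide banding):

```python
def shades_per_channel(bits):
    """Distinct values one color channel can store at a given bit depth."""
    return 2 ** bits

# A smooth horizontal gradient 1920 pixels wide: at each bit depth,
# how many neighboring pixels are forced to share one stored value?
# Wider shared runs show up as visible "bands" in a sunset sky.
width = 1920
band_width = {bits: max(1, width // shades_per_channel(bits))
              for bits in (8, 10)}
print(shades_per_channel(8), band_width[8])    # 256 shades, ~7-pixel bands
print(shades_per_channel(10), band_width[10])  # 1024 shades, ~1-pixel bands
```

At 8 bits there are only 256 shades to spread across 1920 pixels, so every band is several pixels wide and the eye picks it up; at 10 bits each step lands on roughly one pixel and the gradient reads as smooth.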
Today's consumer formats include AVCHD (the Blu-ray-derived standard)/H.264, and H.265/HEVC/VP9. The line between consumer and pro starts to blur greatly here. The 4K standard calls for high bitrates and 10-bit color. Manufacturers continue to produce cheap hardware using 8-bit color because they already have working manufacturing lines for the older HD gear; they simply push the resolution up to 4K using very similar standards. They also maintain the business model by separating pro gear out at the higher standard.
SD cards, like tape, have a maximum sustainable write speed, so consumer video standards are aligned with what cheap hardware can do. Even YouTube works like this. Google has to support so many streams per second that it is hard to even comprehend. Choosing a standard bitrate keeps infrastructure costs manageable. It might not be the best, but it allows everybody to be served.
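The card-speed constraint is simple unit arithmetic, and it trips people up because codec bitrates are quoted in megabits per second while card speeds are quoted in megabytes per second. A sketch with example figures (the 30 MB/s number matches the V30 Video Speed Class's guaranteed sustained write; the codec rates are just illustrative):

```python
def card_can_sustain(codec_mbps, card_write_mb_per_s):
    """True if a card's sustained write speed covers the codec's bitrate.
    Bitrates are in megabits/s; card speeds are in megabytes/s."""
    card_mbps = card_write_mb_per_s * 8   # bytes -> bits
    return codec_mbps <= card_mbps

# A 100 Mb/s consumer 4K codec fits on a V30 card (30 MB/s = 240 Mb/s);
# a 440 Mb/s ProRes-class stream does not.
print(card_can_sustain(100, 30), card_can_sustain(440, 30))
```

This is why consumer cameras cap their recording bitrates where they do: the standard is chosen so that cheap, widely available cards can keep up.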
Profiles are another aspect of codecs that is hard to understand, but they again come from building on the past. Codec profiles stem from hardware requirements that rigidly determine what can be played back or recorded. Early edit systems required strict adherence to a profile. Today's computers let the user select a standard, and most modern devices will play that information back. Even digital cinema systems will play back a consumer HD signal, but why would you choose to present that way when you can have a lot more quality? Lower profiles of AVC/H.264 only support HD frame sizes and 8-bit video; the High profiles support 4K at high bitrates and 10-bit color. Basically, it is just "turning up" the operating level of H.264.
H.265/HEVC is very similar to H.264 in that it is simply a better compressor, with the ability to store even greater color information. Better compression is the main selling point of H.265/HEVC, since YouTube and ISPs like Comcast would love to carry less bandwidth and maximize profits on installed infrastructure. Disc manufacturers love H.265/HEVC because the bandwidth fits inside existing SD card and DVD standards: swap out the video codec chip on a DVD player and you basically have a 4K Blu-ray. Video streaming has put a huge dent in disc sales, so most hardware manufacturers do not see a future in making a 4K disc player. 4K discs are still being talked about because ISPs fail to deliver enough bandwidth for home and remote users to stream even H.265 4K video.
Container - A container is simply a data stream format designed to hold both the audio and video streams in one file. Header information tells the player what is coming down the pipe and what format it will be in. Some playback systems, like cinema DCP, actually use two separate files for audio and video.
AVI, MOV, MP4, and MXF are examples of codec containers. Containers, like the preceding specifications (VHS, CD, DVD), mandate what can be "encoded" inside the format.
AVI and MOV can hold almost any audio or video codec, up to a point.
.MP4 was designed as a delivery format and generally does not allow uncompressed audio. The standard determines this, though there are exceptions where manufacturers extend the "standard."
.MXF is a newer format designed to allow a professional feature set: very high bitrate codecs alongside multiple channels of audio. Those two things combined are what bring a good picture with 20 channels of sound to the theater. Manufacturers like Apple, Avid, and Sony make their own combinations of video and audio codecs inside the .MXF standard, give them a marketing name, and sell that to filmmakers as the "best system that meets their needs." Basically, though, the competing formats differ only at an engineering level of quality. ProRes, HDCAM, DNxHD, DNxHR, XDCAM, and Cineform pretty much use existing technologies and then sell the end user on their method being the best solution.
Who determines how fast the industry progresses? The legal side of things generally determines costs. The consumer comes next, going for the cheap bang for the buck rather than selecting a true evolution in performance or quality.