
Is a capture and playback device a passthrough or a pitstop?

COW Forums : Adobe After Effects

Is a capture and playback device a passthrough or a pitstop?
by Frank Black on Dec 23, 2011 at 4:04:47 am

Hi guys,

Some of you already know that I'm researching the depths of what a capture and playback device does, and that I'm looking for "precise" answers in a language a 2nd grader -- me! -- would understand.

Anyway -- I've been going in circles for 2 and a half weeks -- no lie. I have so much in notes -- in Word, in Notepad, on paper, in emails, IN MY HEAD -- that I need to be UNCOMPRESSED!

I feel like all the answers are in my notes (so many notes). But the dots aren't connecting, and part of the reason is that I'm getting opinions that seem to be contradictory but in all probability are most likely just not "sensitive" enough. The other part of the reason is that I'm a second grader!

The dots aren't connecting because there actually are "holes" in the lines where the dots begin and end. These holes are caused by the two "parts" mentioned above. I'm stuck MAINLY on one thing! And I need a "sensitive" answer!! -- not an industry answer! I need (and I say need respectfully) an answer sensitive to the level of my mind's current capacity not to shut off the moment an unfamiliar term injects it with fear and doubt.

So here's where I'm stuck (and there's gotta be a simple way to FULLY explain this):

1. why do we say that a cap and play device transfers uncompressed when a camera compresses the data when recording it, and since the camera's decompression doesn't fully restore. a full answer if possible. i will remember your name forever. full meaning something like this (though this is probably wrong): well, a camera compresses it, then a cap and play connects to it via sdi, a button is pressed and a camera begins playback, and another button is pressed, and the cap and play sucks the wind out of the sdi and the sdi vacuums the footage out of the camera's playback (yep, takes it right off the LCD), and then it takes the data, encodes it, and shoots it thru a thunderbolt pipe into a mac. oh yeah and by the way -- the playback was uncompressed. we call it uncompressed because the playback just orders the lost data to come out of wherever it has fled to and get back in line to form a full original file in all its megabytes, and the guys that aren't coming back -- well we won't even notice they're gone. and so on and so forth. (sorry for getting carried away. thanks if you're still with me. help!)

2. when does the camera decompress -- in playback or when you press the magic "decompress" button.

3. how do the AJA Io, BM UltraStudio 3D, and MXO2 transfer? do they serve as a passthrough or a pitstop? do they say: hang on, data! can't go to the Mac yet! we must do something to you first. we must: encode? decode? x? y? z?...

4. do all of the three machines mentioned above capture from playback only?


thanks. i'm in debt to you already if you've read this far.


val



Re: Is a capture and playback device a passthrough or a pitstop?
by Walter Soyka on Dec 23, 2011 at 2:39:21 pm

[Frank Black] "1. why do we say that a cap and play device transfers uncompressed when a camera compresses the data when recording it, and since the camera's decompression doesn't fully restore."

Uncompressed doesn't mean "original." It just means that there has been no effort made to mathematically reduce the signal in question. (As an aside, compression can be either lossless, in which case the decompressed video is mathematically identical to the pre-compression video, or compression can be lossy, in which case information is thrown out to reduce size or bandwidth, and the decompressed video shows degradation when compared to the pre-compression video).

In other words, the capture device is adding no additional compression to the incoming signal.

Let's assume we have a camera that records to MPEG-2 (a lossy compression scheme). When that camera plays back MPEG-2 over its video outputs, it must decompress the MPEG-2 and send an uncompressed video signal out, because it can't send the literal MPEG-2 stream directly to analog component or SDI devices.

You're right that this process doesn't recover all the data thrown away by the initial MPEG-2 compression. MPEG-2 artifacts introduced during compression will still exist in the new uncompressed video out signal.
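If it helps to see lossless vs. lossy concretely, here's a tiny Python sketch (purely illustrative; no camera actually runs anything like this). A lossless round trip gives back the exact original data; a lossy step that throws bits away can't be undone:

import zlib

# Lossless: compress and decompress, and you get back the exact original bytes.
original = bytes(range(256)) * 100
restored = zlib.decompress(zlib.compress(original))
assert restored == original  # mathematically identical

# Lossy (illustrative): quantize 10-bit values to 8-bit and back.
# The fine gradations are gone for good -- just like detail discarded by MPEG-2.
samples_10bit = [0, 137, 512, 678, 1023]
quantized_8bit = [v >> 2 for v in samples_10bit]   # throw away 2 bits
restored_10bit = [v << 2 for v in quantized_8bit]  # can't recover them
print(samples_10bit)   # [0, 137, 512, 678, 1023]
print(restored_10bit)  # [0, 136, 512, 676, 1020] -- close, but not the original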

Let's call the initial uncompressed signal (before record) A. Let's call the compressed MPEG-2 stream (at record) B. Let's call the decompressed MPEG-2 signal (at playback) C.

None of these streams is literally the same data. C will look identical to B -- C is simply B decoded, artifacts and all -- but C will not perfectly match A.

Why go through all the hassle? Two reasons. First, uncompressed video is huge and hard to store, which is why many cameras compress video for recording. Second, video transmission standards themselves (like analog component or SDI) work only with uncompressed video.
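For a rough sense of scale, here's a back-of-the-envelope sketch in Python of the data rate of uncompressed 1080i 4:2:2 10-bit video (active picture only; the 25 Mb/s MPEG-2 figure is just an assumed example bitrate):

# Rough data-rate comparison: uncompressed 1080i 4:2:2 10-bit vs. a camera's MPEG-2.
# Active picture only; blanking, audio, and ancillary data are ignored.
width, height, fps, bits = 1920, 1080, 30000 / 1001, 10

luma_samples   = width * height             # full-resolution Y
chroma_samples = (width // 2) * height * 2  # half-width Cb + half-width Cr (4:2:2)
bits_per_frame = (luma_samples + chroma_samples) * bits

uncompressed_mbps = bits_per_frame * fps / 1e6
mpeg2_mbps = 25  # HDV-class recording (assumed example bitrate)

print(f"Uncompressed: ~{uncompressed_mbps:,.0f} Mb/s")  # ~1,243 Mb/s
print(f"MPEG-2:       ~{mpeg2_mbps} Mb/s  ({uncompressed_mbps / mpeg2_mbps:,.0f}x smaller)")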


[Frank Black] "when does the camera decompress -- in playback or when you press the magic "decompress" button."

As above, at playback. The video must be decompressed to a new, full-bandwidth, uncompressed signal to be played over a video output.

If you're talking about a file-based camera, then decompression happens whenever the file is read.


[Frank Black] "how do aja Io, BM UltraStudio 3d, and MXO2 transfer? does it serve as a passthrough or a pitstop? does it say: hang on data! cant go to the mac yet! we must do something to you first. we must: encode? decode? x? y? z?..."

The video signal must be encoded (converted from electrical impulses to data -- this is not the same as compression). A capture device with analog inputs digitizes the analog input (by sampling it, similarly to how an audio CD works). SDI is already technically encoded, but I think most of the capture cards re-encode the data.
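If it helps, here's a tiny illustrative sketch of what digitizing means (not what any particular card does): measure a continuously varying signal at regular intervals and round each measurement to the nearest code in a fixed range -- exactly the idea behind an audio CD's samples.

import math

# Sample a continuous (analog) signal at regular intervals and quantize each
# measurement to an integer code -- that's all "digitizing" means.
sample_rate = 48   # samples per unit time (arbitrary for the illustration)
bit_depth = 8      # 256 possible codes
levels = 2 ** bit_depth

def analog_signal(t):
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)  # stand-in for a voltage, 0..1

samples = [round(analog_signal(n / sample_rate) * (levels - 1))
           for n in range(sample_rate)]
print(samples[:8])  # the first eight quantized codes of the rising waveform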


[Frank Black] "4. do all of the three machines mentioned above capture from playback only?"

These devices don't necessarily know what playback is; they are able to capture any incoming video signal.

A lot of folks with less expensive camcorders take video out and record on an external device specifically to avoid the MPEG-2 compression/decompression cycle and the visual degradation that comes with it.

Is there some way you're looking to relate this question to After Effects?

Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog - What I'm thinking when my workstation's thinking
Creative Cow Forum Host: Live & Stage Events



Re: Is a capture and playback device a passthrough or a pitstop?
by Walter Soyka on Dec 23, 2011 at 2:48:24 pm

I just noticed you cross-posted this question in over a dozen different forums here.

Several contributors like myself have already taken time out of their days to answer your question -- but it had already been answered elsewhere. I suspect several more people will answer, too, because none of us knows that the question has already been asked and answered somewhere else.

From Grazing Etiquette at the COW [link]:
Cross posting is using cut, copy, and paste to put your question into more than one forum at a time. I would say that two is the max if you tell people that you are cross posting and if the post is a legitimate "bubble post" -- one that could be answered by any of the two forums.


Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog - What I'm thinking when my workstation's thinking
Creative Cow Forum Host: Live & Stage Events




Re: Is a capture and playback device a passthrough or a pitstop?
by Frank Black on Dec 28, 2011 at 2:27:44 am

Hey Walter... hi --

I'm sorry I took so long to respond. Like I said in the other forums -- it just worked out that way. I read your response the day after you wrote it. Like I said to some of the other guys -- your response was one of those that completely spoke to me.

Please don't be offended by the fact that I posted in so many rooms. I actually probably did feel a bit unethical before you even mentioned the rule, but I have to say, I was able to put A REALLY CLEAR picture together thanks to the time a lot of the guys spent. You spent among the longest. You really broke it down for me and you didn't have to. I noticed that as soon as I started reading. You really helped me out. I'm very happy to know uncompressed now. I'm also motivated to learn more. Thanks a lot. Don't mind if I come back with further questions. I have some already, and I've had them for a couple of days now. I do wanna formulate them better though. Anyway, thanks a lot Walter (so much man, you have no idea). Best, Val



Re: Is a capture and playback device a passthrough or a pitstop?
by Walter Soyka on Dec 28, 2011 at 3:04:52 pm

No worries, and I'm glad we could help.

Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog - What I'm thinking when my workstation's thinking
Creative Cow Forum Host: Live & Stage Events



Re: Is a capture and playback device a passthrough or a pitstop?
by Frank Black on Dec 29, 2011 at 5:28:25 am

Hi Walter. Can you please help me further?

I understand uncompressed now, but am learning further about I/O devices. I've found answers to some details, but am having difficulty with others. I don't mean to take advantage. Can you please take the time? Can you make me understand?


Can you tell me what motion-adaptive conversion is?

What is uncompressed 10-bit 4:4:4 RGB and 4:2:2 YUV? Someone had already explained 4:4:4 RGB but I would love your explanation.

If an I/O device can take a single link SDI of 4:2:2, and a single link 3G SDI 4:4:4, then does that mean I get to choose? Is there a button?

How does genlock work? Is it automatic, or do buttons have to be pressed?

How does up/down/cross conversion happen? Do I choose it? Is there a button or is it done in software? Can I avoid it? Why would I use it? One example I learned is that it's for matching up looks if a workflow is in HD and some SD footage needs to be added. But are there more reasons, at least common ones? And what actually happens -- do more lines get created/some get removed?

Is converting between 4:4:4 and 4:2:2 to set up single-link HD-SDI monitoring and output done because there's no computer or device with two SDI inputs that the I/O can output to (since the above I/Os have 2 SDI outs also)?

Which I/O device do you use? Why did you choose it?

Can you share with me anything else (even if it isnt directly related)?

Please take the time Walter. If you can, please do. Thanks either way. Later man




Re: Is a capture and playback device a passthrough or a pitstop?
by Walter Soyka on Dec 29, 2011 at 3:39:04 pm

I'll answer these questions, but there's a lot to go through here and it'll take me a while to put together a decent response.

Can you provide any context for your questions? It might help me explain these concepts to you better if I can relate them back to your specific situation.

Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog - What I'm thinking when my workstation's thinking
Creative Cow Forum Host: Live & Stage Events



Re: Is a capture and playback device a passthrough or a pitstop?
by Frank Black on Dec 30, 2011 at 1:32:29 am

Walter, thanks a lot.

I'm researching the IoXT, the UltraStudio 3D, and the MXO2 to be able to know the difference between them, down to the details, for a professional workflow. To do this, I'm first learning what the features they all share in common actually mean, so I can understand the details on an in-depth-enough level. When I first started, I got stuck on uncompressed for almost 3 weeks. Now I'm going down the list of other features that I/O devices share. The ones I asked you about yesterday are the ones I'm having difficulty with.

Today I was able to learn that 3G-SDI only means that the connection will handle the highest quality HD if such footage is streamed, and that there's no button to press to select whether "3G" will be activated to accept such footage. Can you please confirm whether this is correct? I learned that 3G just means that enough bandwidth is available to support the highest possible quality input.

Again thank you, and if it's fine with you, I'll rephrase my questions here based on things I learned today, adding and subtracting to and from each question. And adding two questions (sorry bro) at the end.

1. What is uncompressed 10-bit 4:4:4 RGB and 4:2:2 YUV?

2. Dual-link SDI 4:4:4 / Single-link 3G SDI 4:4:4 -- is this possible only with a "live" output? Is 4:4:4 only possible with a live output?

3. If an I/O device can take a single link SDI of 4:2:2, and a single link 3G SDI 4:4:4, then does that mean I get to choose?... Does this mean that a single link SDI that doesn't support 3Gbps can only take 4:2:2?

4. Does hardware-based "10-bit" up/down/cross conversion mean that no codec is used, that the conversion is done in uncompressed format?

5. UltraStudio 3D has HDMI for capture and output. But it also has SDI. So would HDMI be used, in the case of capture for example, if the camera doesn't have SDI ports?

6. What is motion-adaptive conversion?

7. How does genlock work? Is it automatic, or do buttons have to be pressed?

8. How does up/down/cross conversion happen? Do I choose it? Is there a button or is it done in software? Can I avoid it? Why would I use it? One example I learned is that it's for matching up looks if a workflow is in HD and some SD footage needs to be added. But are there more reasons, at least common ones? And what actually happens -- do more lines get created/some get removed?

9. Is converting between 4:4:4 and 4:2:2 to set up single-link HD-SDI monitoring and output done because there's no computer or device with two SDI inputs that the I/O can output to (since the above I/Os have 2 SDI outs also)?

**And this special question: Thunderbolt transfers at 10Gbps. So two hours of footage can be transferred in about 30 secs from the I/O device to a Mac. But how long does it take for the two hours of footage to reach the I/O through SDI? Is it also 30 secs? Or is it two hours since the transfer is done in playback and therefore in real time?

**What I/O device do you use, and why did you choose it?

***What do you edit with :)

Walter, I promise to help anyone the way you helped me with uncompressed.

Man, sorry for the length of this email.



Re: Is a capture and playback device a passthrough or a pitstop?
by Walter Soyka on Dec 30, 2011 at 4:18:33 am

All right, welcome to Digital Video Engineering 101. Class is now in session!

There's a lot of technical information behind your questions. I'll try to give answers that touch on the technical issues, but without being totally overwhelming. Let me know if my answers raise more questions.



[Frank Black] "1. What is uncompressed 10-bit 4:4:4 RGB and 4:2:2 YUV? "

RGB and YUV are two different ways to encode color information.

RGB stores three values for each pixel, representing its intensity in red, green, and blue channels. Mixing these red, green, and blue channels together creates both color and brightness for each pixel.

YUV also uses three values for each pixel: one channel for luminance (the Y channel, or brightness) and two chrominance channels (the U and V channels, or color). YUV is almost always a misnomer -- almost every time you see YUV, it's really referring to YPbPr (analog) or YCbCr (digital). With these, the Cb or Pb channels indicate the deviation from gray on a blue/yellow color axis, and the Cr or Pr channels indicate the deviation from gray on a red/cyan axis.

That's a lot of techno-babble to say this: you can lay out colors in a plane. With two coordinates on that plane (the values of the Cb and Cr channels), you can identify a point which corresponds to a specific color. You can then raise or lower the brightness of that specific color (with the value of the Y or luminance channel).

Whether you're using RGB or YCbCr, you do this for every pixel in the raster to create the image.

RGB and YCbCr can be used to describe the same colors (though the mathematics of expressing the entire range of YCbCr colors in RGB can get a little complicated). The real difference between RGB and YCbCr is this: RGB distributes brightness information throughout all three channels, while YCbCr isolates brightness in a single channel.

Why is this important? Unlike RGB, YCbCr allows for chroma subsampling. Human vision is much more sensitive to changes in brightness than it is to changes in color, so we can reduce the bandwidth required for video by including less color information than luminance information. With 4:2:2 YCbCr, we use only 2 samples in each chrominance channel for every 4 samples in the luminance channel. Essentially, the brightness information is carried at full resolution, but the color information is carried at half-resolution since the human eye is vastly less sensitive to it.

The 10-bit part refers to the bit depth, or degree of precision, with which we can describe colors. Imagine the spectrum: Red, orange, yellow, green, blue, indigo, violet. If you can only express those colors in the range of 1 to 7, you can't get any of the gradations in between. If you can express those colors in the range of 1 to 100, or 1 to 1000, though, you can get more and more of the in-between colors, too. The more bits in the bit depth figure, the more subtle gradations in color you can express, and the less banding will be visible in your image.
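To tie those ideas together, here's a small illustrative Python sketch: one pixel described both as RGB and as YCbCr (using Rec. 709 coefficients, full-range 0-1 values; real video systems add offsets and headroom), plus how many steps per channel each bit depth gives you:

# One pixel, two ways of describing the same color (Rec. 709 coefficients,
# full-range values in 0.0-1.0; a real video system adds offsets and headroom).
r, g, b = 0.25, 0.50, 0.75

y  = 0.2126 * r + 0.7152 * g + 0.0722 * b  # brightness lives in one channel
cb = (b - y) / 1.8556                      # blue/yellow deviation from gray
cr = (r - y) / 1.5748                      # red/cyan deviation from gray
print(f"RGB ({r}, {g}, {b})  ->  YCbCr ({y:.3f}, {cb:.3f}, {cr:.3f})")

# Bit depth is just how finely each of those values can be expressed.
for bits in (8, 10):
    print(f"{bits}-bit: {2 ** bits} steps per channel")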



[Frank Black] "2. Dual-link SDI 4:4:4 / Single-link 3G SDI 4:4:4 -- is this possible only with a "live" output? Is 4:4:4 only possible with a live output?"
You shouldn't try to make a distinction between live and pre-recorded outputs; as far as a video device is concerned, there's no difference. It's just plumbing.

That said, there are very few 4:4:4 recorders.



[Frank Black] "3a. If an I/O device can take a single link SDI of 4:2:2, and a single link 3G SDI 4:4:4, then does that mean I get to choose?"


Do you get to choose? Not really -- it depends on what devices you're connecting. If both devices support 3G SDI, then they can pass RGB or 4:4:4. If either device only supports HD-SDI, then you're limited to 4:2:2 YCbCr only.



[Frank Black] "3b. Does this mean that a single link SDI that doesn't support 3Gbps can only take 4:2:2?"
Yes.
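For reference, here's a simplified summary of the nominal link rates involved; the exact capabilities depend on the specific SMPTE standard and on the devices at each end:

# Nominal SDI link rates and what they can carry for HD (simplified summary).
sdi_links = {
    "SD-SDI (SMPTE 259M)":     ("270 Mb/s",       "SD 4:2:2 YCbCr"),
    "HD-SDI (SMPTE 292M)":     ("1.485 Gb/s",     "HD 4:2:2 YCbCr"),
    "Dual-link HD-SDI (372M)": ("2 x 1.485 Gb/s", "HD 4:4:4 RGB / higher bit depth"),
    "3G-SDI (SMPTE 424M)":     ("2.97 Gb/s",      "HD 4:4:4 RGB on a single cable"),
}
for name, (rate, payload) in sdi_links.items():
    print(f"{name:26s} {rate:16s} {payload}")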




[Frank Black] "4. Does hardware-based "10-bit" up/down/cross conversion mean that no codec is used, that the conversion is done in uncompressed format?"


Nearly all video processing (in both hardware and software) is done uncompressed, because the frame must be decompressed to give the processor an image buffer to work with. Without that decompression step, it's just data -- not video or an image that can be manipulated.



[Frank Black] "5. UltraStudio 3D has HDMI for capture and output. But it also has SDI. So would HDMI be used, in the case of capture for example, if the camera doesn't have SDI ports?"

Yes.



[Frank Black] "6. What is motion-adaptive conversion?"

I assume you're talking about frame rate conversion?

Consider going from 24 frames per second to 30 frames per second: not only does 30 fps video have more frames in each second, but they occur at different points in time than the 24 fps original. The system must interpolate frames that don't exist in the original.

With motion-adaptive conversion, the system tries to analyze the movement of each pixel or feature from one frame to the next. By determining the direction and speed of the motion (the motion vector) of these parts of the image, the system is able to more accurately interpolate images that happen at points in time in between known frames. This usually provides superior results to simple frame blending (which is soft or smeary) or frame repeating (which is jittery), but it may produce blobs in areas of hard-to-estimate motion.
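Here's a small illustrative sketch of why interpolation is needed at all: most 30 fps output frames fall between two 24 fps source frames. Naive blending just mixes the two neighbours by proximity; a motion-adaptive converter instead moves pixels along estimated motion vectors to build the in-between image.

# Where do 30 fps output frames fall relative to a 24 fps source?
src_fps, dst_fps = 24.0, 30.0

for n in range(5):           # first few output frames
    t = n / dst_fps          # time of the output frame
    pos = t * src_fps        # position in source-frame units
    a, frac = int(pos), pos - int(pos)
    if frac == 0:
        print(f"output {n}: exactly source frame {a}")
    else:
        # naive frame blending: weight the two neighbours by proximity;
        # motion-adaptive conversion would move pixels along motion vectors instead
        print(f"output {n}: between source {a} and {a + 1} "
              f"(blend {1 - frac:.1f}/{frac:.1f})")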



[Frank Black] "7. How does genlock work? Is it automatic, or do buttons have to be pressed?

Genlock lets multiple video sources be synchronized. Without genlock, switching between unsynced sources could result in jumping images while the display readjusts to the new timing signals. It should be automatic on devices that support it.



[Frank Black] "8. How does up/down/cross conversion happen? Do I choose it? Is there a button or is it done in a software? Can I avoid it? Why would I use it? One example I learned is that it's for matching up looks if a workflow is in HD and some SD footage needs to be add. But are there more reasons, at least common ones? And what actually happens -- do more lines get created/some get removed?"

Capture cards have inputs, frame buffers, and outputs. The frame buffer is typically locked to the input, but is independent of the outputs. For up/down/cross conversion, you specify a different format for output than input.

You'd use up/down/cross conversion if you needed video from one format (like 720p59.94) to be captured as another format (like 1080i29.97). The card will resize the image and alter the frame rate as necessary. This only works on incoming video signals; you can't use the card to convert a file within the computer.
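As a very rough sketch of what the converter is asked to do for, say, 720p59.94 in and 1080i29.97 out (the real hardware's filtering and interlacing logic is far more involved):

# Up-conversion sketch: 720p59.94 in, 1080i29.97 out.
src = {"width": 1280, "height": 720,  "rate": 60000 / 1001, "interlaced": False}
dst = {"width": 1920, "height": 1080, "rate": 30000 / 1001, "interlaced": True}

scale_x = dst["width"] / src["width"]    # 1.5 -- new pixels are interpolated
scale_y = dst["height"] / src["height"]  # 1.5 -- new lines are created
frames_in_per_frame_out = src["rate"] / dst["rate"]  # 2.0 -- each output frame's
                                                     # two fields come from two
                                                     # source frames
print(scale_x, scale_y, frames_in_per_frame_out)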



[Frank Black] "9. Is converting between 4:4:4 and 4:2:2 to set up single-link HD-SDI monitoring and output done because theres no computer or device with two SDI inputs that the I/O can output to (since the above I/Os have 2 SDI outs also.)

No -- there are 3G HD-SDI or dual-link HD-SDI monitors on the market, but they are expensive. Very few productions actually need 4:4:4 or RGB. 4:2:2 YCbCr is very high-quality and commonly used.



[Frank Black] "**And this special question: Thunderbolt transfers at 10Gbps. So two hours of footage can be transfered in about 30 secs from the I/O device to a Mac. But how long does it take for the two hours of footage to reach the I/O through SDI? Is it also 30 secs? Or is it two hours since the transfer is done in playback and therefore in real time?

Real time. SDI moves video, not data, so it is always real time.
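To put rough numbers on it (assuming HD-SDI's nominal 1.485 Gb/s line rate as a stand-in for the captured data, and 10 Gb/s for Thunderbolt):

hours = 2
seconds = hours * 3600

hd_sdi_gbps = 1.485      # nominal HD-SDI line rate -- video plays at 1x, period
thunderbolt_gbps = 10.0  # first-generation Thunderbolt, per channel

footage_gb = hd_sdi_gbps * seconds / 8        # data carried during 2 h of playback
file_copy_minutes = footage_gb * 8 / thunderbolt_gbps / 60

print(f"Capture over SDI:  {hours} hours (real time, by definition)")
print(f"Same data over TB: ~{file_copy_minutes:.0f} minutes as a file copy")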



[Frank Black] "**What I/O device do you use, and why did you choose it?

I used to use an AJA Kona 3, because AJA's customer service is extraordinary.

I currently use a BMD DeckLink Extreme 3D. I use DaVinci Resolve, and Resolve only supports Blackmagic Design cards.



[Frank Black] "***What do you edit with :)

I used to edit with FCP7, but now I'm using Avid and Premiere Pro on both Macs and PCs.

Most of my work isn't editorial, though -- I primarily use my capture card for monitoring on my HD-SDI Flanders Scientific grading monitor from After Effects and Resolve.

Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog - What I'm thinking when my workstation's thinking
Creative Cow Forum Host: Live & Stage Events




Re: Is a capture and playback device a passthrough or a pitstop?
by Frank Black on Dec 30, 2011 at 9:15:47 pm

Walter thank you very much. (Thank you very much)

...I do have some follow-up questions to your answers.


W.S. -- That's a lot of techno-babble to say this: you can lay out colors in a plane. With two coordinates on that plane (the values of the Cb and Cr channels), you can identify a point which corresponds to a specific color. You can then raise or lower the brightness of that specific color (with the value of the Y or luminance channel).

How do you "raise or lower?" Is there a button?


W.S. -- Why is this important? Unlike RGB, YCbCr allows for chroma subsampling. Human vision is much more sensitive to changes in brightness than it is to changes in color, so we can reduce the bandwidth required for video by including less color information than luminance information. With 4:2:2 YCbCr, we use only 2 samples in each chrominance channel for every 4 samples in the luminance channel. Essentially, the brightness information is carried at full resolution, but the color information is carried at half-resolution since the human eye is vastly less sensitive to it.

Is there a way to choose b/w RGB and YUV or is it based on what the I/O device offers and even firstly what the camera shoots?


W.S. -- The 10-bit part refers to the bit depth, or degree of precision, with which we can describe colors. Imagine the spectrum: Red, orange, yellow, green, blue, indigo, violet. If you can only express those colors in the range of 1 to 7, you can't get any of the gradations in between. If you can express those colors in the range of 1 to 100, or 1 to 1000, though, you can get more and more of the in-between colors, too. The more bits in the bit depth figure, the more subtle gradations in color you can express, and the less banding will be visible in your image.

1. Is 10-bit the highest? What's the lowest?
2. How do I know what the "range" is for each bit (or is it a minimum of plural bits)?
3. By "subtle," do you mean good? less noticeable?
4. Is the higher the range, the more "precise" the final color?


W.S. -- That said, there are very few 4:4:4 recorders.

1. What different variations can RGB be in -- 4:4:4, 4:2:2 (4:2:1)?
2. What will determine whether I get 4:4:4 -- the camera? the I/O device?......
..... (your answer to a couple of questions above may already have answered this....

....And, you also said in your response -- "Do you get to choose? Not really -- it depends what devices you're connecting. If both devices support 3G SDI, then they can pass RGB or 4:4:4...." So does this mean camera records 444 no matter what, and that if it has 3G SDI outs then it can output 444, and if the I/O device has 3G SDI then it can input 444? And that at the same time the I/O has the choice of inputting YUV? ).

3. And how will 444 be passed to the NLE? Will it still be 444? And if 422 is passed, will it still be 422?


Thank you. Thank you for your explanation of motion adaptive, thank you for everything.



Re: Is a capture and playback device a passthrough or a pitstop?
by Walter Soyka on Jan 2, 2012 at 4:31:13 pm

[Frank Black] "W.S. -- That's a lot of techno-babble to say this: you can lay out colors in a plane. With two coordinates on that plane (the values of the Cb and Cr channels), you can identify a point which corresponds to a specific color. You can then raise or lower the brightness of that specific color (with the value of the Y or luminance channel).

How do you "raise or lower?" Is there a button?"


No button -- I was trying to describe how the computer uses YCbCr to represent color. Replace "you" with "the video system" in the paragraph above and it might make more sense.

In other words, the color is specified by the combination of the Cb and Cr channels, and that color is then displayed with the appropriate brightness as represented by the Y channel.



[Frank Black] "W.S. -- Why is this important? Unlike RGB, YCbCr allows for chroma subsampling. Human vision is much more sensitive to changes in brightness than it is to changes in color, so we can reduce the bandwidth required for video by including less color information than luminance information. With 4:2:2 YCbCr, we use only 2 samples in each chrominance channel for every 4 samples in the luminance channel. Essentially, the brightness information is carried at full resolution, but the color information is carried at half-resolution since the human eye is vastly less sensitive to it.

Is there a way to choose b/w RGB and YUV or is it based on what the I/O device offers and even firstly what the camera shoots?"


This is based on the devices. Nearly all video devices are YUV (sic); only a handful of high-end tools work in native RGB.




[Frank Black] "1. Is 10-bit the highest? What's the lowest?"

10-bit is not the theoretical highest bit depth, but it is the highest in practice. 8-bit is the "standard" depth.

What's the difference? A bit lets a computer represent two values: on, or off. For every bit we add, we double the number of values we can represent.

8 bits (or 2 to the 8th power) can represent 256 values (0-255). With RGB, since we have three channels, we can represent 256*256*256 colors -- or 16.7 million different colors.

A 10-bit system (2 to the 10th power) can represent 1,024 values (0-1023). Three channels of 10-bit RGB can represent about 1.07 billion colors.

As you can see, that's a lot more precision in color for just a couple more bits! What does it mean? The gamut is the same -- that is, you cannot make a greener green with 10-bit than you can with 8-bit; however, you can get more unique shades of green in between yellow and blue.
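Spelled out as a quick sketch:

for bits in (8, 10):
    per_channel = 2 ** bits
    total = per_channel ** 3   # three RGB channels
    print(f"{bits}-bit: {per_channel:>5} values per channel, "
          f"{total:,} colors ({total / 1e6:,.1f} million)")
# 8-bit:   256 values per channel, 16,777,216 colors (16.8 million)
# 10-bit: 1024 values per channel, 1,073,741,824 colors (1,073.7 million)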

The difference between 8-bit and 10-bit video really comes into play for synthetic imagery (CG) and in effects or extreme color correction work.

Don't get too hung up on this -- you can still do very professional-looking work in 8-bit, and all consumer displays are only 8-bit anyway.



[Frank Black] "1. What different variations can RGB be in -- 4:4:4, 4:2:2 (4:2:1)?"

Those numbers and colons indicate chroma subsampling; because RGB doesn't have separate chroma channels, it cannot be subsampled. Although many people call RGB 4:4:4, this isn't technically accurate as there can be no subsampling.

With 4:2:2 YCbCr, for every 4 horizontal luminance samples, there are 2 horizontal chrominance samples in Cb and 2 chrominance samples in Cr. In other words, the color is sampled at half the resolution of the brightness, and so the video only requires 2/3 the bandwidth of the fully-sampled signal, with very little noticeable degradation. If you visualized the three channels, the Y or brightness channel would be full-width, and the Cb and Cr channels would be squeezed horizontally by 50%. If we were talking about a 1920x1080 frame, there would be 1920 Y samples, 960 Cb samples, and 960 Cr samples for each line.

Other common subsampling schemes include 4:1:1 (where each chrominance channel is sampled at a quarter of the horizontal resolution of the luminance channel), 4:2:0 (where each chrominance channel is sampled at half the horizontal resolution of the luminance channel, but only on alternating lines, effectively halving both the horizontal and vertical chroma resolution instead of quartering the horizontal resolution), and 3:1:1 (used by Sony's HDCAM, it uses 1440 horizontal samples for luminance, 480 samples for Cb, and 480 samples for Cr in a 1920x1080 raster).
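If it helps to see the sample counts side by side, here's a quick illustrative sketch for one line of HD video (3:1:1 uses HDCAM's 1440-sample luminance raster):

# Samples per line for common subsampling schemes on an HD raster.
schemes = {
    #         Y/line  C/line  fraction of lines carrying chroma
    "4:4:4": (1920, 1920, 1.0),
    "4:2:2": (1920,  960, 1.0),
    "4:1:1": (1920,  480, 1.0),
    "4:2:0": (1920,  960, 0.5),  # chroma only on alternating lines
    "3:1:1": (1440,  480, 1.0),  # HDCAM raster: 1440 luma samples per line
}
full = 1920 * 3  # a fully-sampled 4:4:4 line, for comparison
for name, (y, c, lines) in schemes.items():
    rel = (y + 2 * c * lines) / full
    print(f"{name}: {y} Y, {c} Cb, {c} Cr samples  (~{rel:.0%} of full 4:4:4)")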

See Karl Soulé's Color Subsampling, or What is 4:4:4 or 4:2:2? [link] for a visual explanation of subsampling.

Regardless of how the camera works, many systems (including After Effects and FCPX) work only in RGB. This means that YCbCr will be converted to RGB for processing.



[Frank Black] "And, you also said in your response -- "Do you get to choose? Not really -- it depends what devices you're connecting. If both devices support 3G SDI, then they can pass RGB or 4:4:4...." So does this mean camera records 444 no matter what, and that if it has 3G SDI outs then it can output 444, and if the I/O device has 3G SDI then it can input 444? And that at the same time the I/O has the choice of inputting YUV? )."

What camera are you using? Unless it costs more than a nice new car, it does not record or output 4:4:4.

Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog - What I'm thinking when my workstation's thinking
Creative Cow Forum Host: Live & Stage Events



Re: Is a capture and playback device a passthrough or a pitstop?
by Frank Black on Jan 3, 2012 at 1:42:39 am

Hey Walter, I gotta really thank you man, you've really schooled me with these several posts. Everything you taught me, I intended on learning, but I never hoped to learn it on such a level this time around, so thanks a lot, and thanks for bringing it down to my level, otherwise it would've sent me back into circles.

Per your last comment/question about what camera I'm using, I now understand. Up to now, based on what I'd been gathering from you, I thought 4:4:4 is what ANY camera captures.

Checked out the link you provided for subsampling and printed it out (your posts have been printed and analyzed too :))

Checked out your site too -- you're in NY!! Me too. Very cool. I thought you were in Europe based on the times I got a couple of responses. Well anyway, thanks a lot man. I'll be looking out for your responses to people's questions. You really schooled me on I/O devices. Thanks. Be well man.




Re: Is a capture and playback device a passthrough or a pitstop?
by Walter Soyka on Jan 3, 2012 at 9:02:51 pm

[Frank Black] "Hey Walter, I gotta really thank you man, you've really schooled me with these several posts. Everything you taught me, I intended on learning, but I never hoped to learn it on such a level this time around, so thanks a lot, and thanks bringing it down to my level otherwise it would've sent me back into circles. "

You're welcome. I'm glad I was able to help a bit!

Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog - What I'm thinking when my workstation's thinking
Creative Cow Forum Host: Live & Stage Events



Re: Is a capture and playback device a passthrough or a pitstop?
by Frank Black on Jan 4, 2012 at 1:05:58 am

[Walter] You're welcome. I'm glad I was able to help a bit!

More like 10 bits :)


