Is a capture and playback device a passthrough or a pitstop?
Two of you already know that I'm researching the depths of what a capture and playback device does, but that I'm looking for "precise" answers in a language a 2nd grader -- me! -- would understand.
Anyway -- I've been going in circles for two and a half weeks -- no lie. I have so much in my notes -- in Word, in Notepad, on paper, in emails, IN MY HEAD -- that I need to be UNCOMPRESSED!
I feel like all the answers are in my notes (so many notes). But the dots aren't connecting, and part of the reason is that I'm getting opinions that seem to contradict each other but in all probability are most likely just not "sensitive" enough. The other part of the reason is that I'm a second grader!
The dots aren't connecting because there actually are "holes" in the lines where the dots begin and end. These holes are caused by the two "parts" mentioned above. I'm stuck MAINLY on one thing! And I need a "sensitive" answer!! -- not an industry answer! I need (and I say need respectfully) an answer sensitive to the level of my mind's current capacity not to shut off the moment an unfamiliar term injects it with fear and doubt.
So here's where I'm stuck (and there's gotta be a simple way to FULLY explain this):
1. Why do we say that a cap-and-play device transfers uncompressed video, when the camera compressed the data while recording it and the camera's decompression doesn't fully restore it? A full answer if possible -- I will remember your name forever. Full meaning something like this (though this is probably wrong): well, a camera compresses it, then a cap-and-play connects to it via SDI, a button is pressed and the camera begins playback, and another button is pressed, and the cap-and-play sucks the wind out of the SDI and the SDI vacuums the footage out of the camera's playback (yep, takes it right off the LCD), and then it takes the data, encodes it, and shoots it through a Thunderbolt pipe into a Mac. Oh yes, and by the way -- the playback was uncompressed. We call it uncompressed because the playback just orders the lost data to come out of wherever it has fled to and get back in line to form the full original file in all its megabytes, and the guys that aren't coming back -- well, we won't even notice they're gone. And so on and so on. (Sorry for getting carried away. Thanks if you're still with me. Help!)
2. When does the camera decompress -- in playback, or when you press the magic "decompress" button?
3. How do the AJA Io, Blackmagic UltraStudio 3D, and Matrox MXO2 transfer? Do they serve as a passthrough or a pitstop? Do they say: hang on, data! Can't go to the Mac yet! We must do something to you first. We must: encode? decode? x? y? z?...
4. Do all three of the machines mentioned above capture from playback only?
Thanks. I'm in debt to you already if you've read this far.
I can't answer all of your questions but might be able to help a little bit. Here's the thing: whenever you play back footage, it has to be decompressed/decoded (whatever you want to call it). You cannot watch compressed video. The compression/coding of video footage is only done in order to store it; when you want to access it, you must reverse whatever you did to it to get a full video stream.

Think of a zip file. You select a document and tell your computer to zip it. It encodes the data in the file (translates a long series of 0s and 1s into a shorter series of 0s and 1s) and then stores it. You can keep it on your computer forever or email it, but whenever you want to read what's inside, you must first unzip it back into the long series of 0s and 1s. If you just open the zip file, then you unzip it temporarily to your RAM or some temp folder; if you extract it, it creates the original file and stores it permanently on your hard drive.

So whenever you hit play on a camera, it is having to uncompress/decode the footage. In the case of an I/O device, the camera then sends that uncompressed/decoded signal down a wire and into the device.
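If it helps to see the zip idea in action, here's a tiny sketch in Python (just an illustration -- zlib is a lossless codec from the same family as zip; camera codecs like H.264 are lossy, so their decode gives you a full picture back, just not the exact original pixels):

```python
import zlib

# A "file" full of repetitive data, a bit like video frames that barely change.
original = b"blue sky blue sky blue sky " * 1000

# "Zipping": encode the long series of 0s and 1s into a shorter series.
compressed = zlib.compress(original)
print(len(original), "bytes before,", len(compressed), "bytes after")

# "Unzipping": decoding gives the original back, bit for bit.
restored = zlib.decompress(compressed)
print(restored == original)  # True -- zip-style compression is lossless
```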
Avid Certified Instructor - MC5.5
Apple Certified Trainer - FCP7
Andrew, please forgive the late response. It's just how things turned out. But I read your response almost immediately. I actually posted this question in about a dozen COW forums, and many folks took the time to help (!!). And aside from picking up a few side-gems from everyone who responded, I was actually able to get a really clear picture because of everyone's input. Yours spoke to me. Your saying "You cannot watch compressed video" was like an Aha! moment for me. Thanks a lot. I really actually understand now. Cheers, Andrew. Thanks.
Andrew, can you help me further? I'm still researching I/O devices and am finding only bare definitions for many of the details. If you can find the time, please do.
I'm trying to find out about 3G-SDI/HDMI, 4:4:4 RGB, and conversions (and genlock if possible).
So what if I have SDI? Tougher cables, etc... But so what? What stands out? Can it transfer better quality? Is that even possible?
What is 4:4:4 RGB?
And if I'm converting between 4:4:4 and 4:2:2 to set up single-link HD-SDI output, why am I doing this? And how is it happening?
In up/down/cross conversions, do I choose one of the three (is there a button?)? Can I choose NOT to choose one and just let the data through? And how is it done? What happens to the footage? Do lines get added? And why would I need to do this? I know that in a broadcast setting, and I'm sure other settings as well, one may be given some SD footage and some HD and will need to kinda match them up. But why else?
Please find the time if you can. Thanks for before and thanks a lot anyway.
OK, first thing: RGB and YCbCr are two common colourspaces. A colourspace is simply a method of storing the colour of each pixel in an image as a series of numbers. RGB colourspace defines a pixel's colour by assigning a number to how much red, green and blue there is in it. It is probably the simplest and easiest colourspace to understand, because mixing primary colours to make other colours is a common idea in everyday life. YCbCr was invented to allow black and white televisions to receive a colour signal and still work. Again it has 3 channels. The first (Y/luminance) describes the brightness of the pixel and therefore makes a black and white image on its own. The colour difference channels (chrominance blue and chrominance red) allow you to add/subtract blue and red to/from the pixel to give it colour.
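In case numbers make it more concrete, here's a rough sketch of that conversion for one pixel (Python; I'm assuming the BT.709 HD coefficients here -- SD's BT.601 uses slightly different weights):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one pixel from RGB to YCbCr (all values in 0.0..1.0)."""
    # Y (luminance): a weighted mix of R, G and B -- the black & white image.
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    # Cb / Cr (colour difference): how far blue and red sit from the brightness.
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

# A pure grey pixel: the colour difference channels come out as zero.
print(rgb_to_ycbcr(0.5, 0.5, 0.5))  # (0.5, 0.0, 0.0)
```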
A second benefit of YCbCr is that you can reduce the resolution of the colour difference channels and the human eye doesn't really notice. This is called chroma subsampling, and it is a very common way of reducing the amount of data in a video stream. Where the resolution of the colour difference channels is reduced to half that of the luminance channel, we write 4:2:2 to show that the first channel has twice as much data as the other two. RGB can't really be treated this way, which is why it usually has to be full 4:4:4. The reason we have different cables (HD-SDI, dual-link, 3G) is that you can only fit a 4:2:2 HD signal (up to about 1080p30) down a single 1.5G BNC cable. In order to get 4:4:4 YCbCr or RGB out of a camera, you have to use two cables, or the fairly new 3G cable, which fits twice as much data as a traditional 1.5G cable.
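To put rough numbers on the cable part, here's the back-of-envelope arithmetic (a sketch assuming a 10-bit 1080p30 signal and the nominal 1.485 / 2.97 Gbit/s link rates):

```python
# Back-of-envelope bit rates for a 10-bit 1080p30 signal.
width, height, fps, bits = 1920, 1080, 30, 10

# 4:2:2 YCbCr: full-resolution Y, plus Cb and Cr at half horizontal
# resolution -> 2 samples per pixel on average.
rate_422 = width * height * 2 * bits * fps
print(f"4:2:2 active video: {rate_422 / 1e9:.2f} Gbit/s")  # ~1.24

# 4:4:4 (YCbCr or RGB): three full-resolution channels -> 3 samples per pixel.
rate_444 = width * height * 3 * bits * fps
print(f"4:4:4 active video: {rate_444 / 1e9:.2f} Gbit/s")  # ~1.87

# A single HD-SDI link runs at 1.485 Gbit/s total (it also carries blanking),
# so 4:2:2 fits but 4:4:4 does not -- hence dual-link or 3G (2.97 Gbit/s).
print("4:4:4 fits one 1.5G link?", rate_444 < 1.485e9)  # False
```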
Hope this helps.
Avid Certified Instructor - MC5.5
Apple Certified Trainer - FCP7
So good! Thanks a lot, Andrew.