I'm a cinematography student and have just been reading information about SD and HD cameras. I had a question I was hoping you could help me with.
I understand that standard definition cameras have 4:3 chips, and that a 4:3 chip has to stretch the frame to make it look 16:9, losing quality in the process.
I've read that a Digibeta camera has a 16:9 chip and can record widescreen by squeezing the image into 4:3. How can it have a 16:9 chip when it's an SD camera? It really confused me!
Can anyone help explain this in layman's terms please?
I would really appreciate it. Thank you.
Originally all standard def cameras had 4:3 shaped sensors and the camera would read the pixels out to create a 4:3 electronic picture.
Widescreen material, which meant feature films, was either transferred to video as letterbox, where you saw the whole picture but the top and bottom of the screen had black bars, or pan&scan, where the screen was filled with only a section of the film and you lost a large amount of the original film frame.
In the early 1990s widescreen TV was invented as a compromise. It allowed widescreen pictures to be displayed almost fully by stretching the same number of recorded pixels horizontally across the screen. This arrangement is called anamorphic: if you imagine the picture as a series of pixels, those pixels are spread further apart in the horizontal direction than in the vertical direction (known as having non-square pixels). Compare that to letterboxing your widescreen pictures on a 4:3 display, which throws away pixels in the vertical direction: anamorphic uses all of the available pixels in the vertical direction. You now have an imbalance between the resolution of detail in the horizontal and vertical directions, but basically more pixels = sharper pictures. This 16:9 anamorphic TV travels down the same broadcast path as 4:3 and is still standard def.
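As a rough sketch of the arithmetic behind that (PAL numbers assumed here, purely for illustration):

```python
# Anamorphic 16:9 vs letterbox on the same SD raster.
# Assumption: a PAL SD raster of 720x576 stored pixels (illustrative only).
STORED_W, STORED_H = 720, 576   # same stored raster for 4:3 and anamorphic 16:9

# Pixel aspect ratio: how much wider than tall each stored pixel is displayed.
par_16x9 = (16 / 9) * (STORED_H / STORED_W)   # ~1.422: pixels stretched wide
par_4x3 = (4 / 3) * (STORED_H / STORED_W)     # ~1.067: nearly square

# Letterboxing the same 16:9 picture inside a 4:3 frame only uses
# 3/4 of the vertical lines; anamorphic keeps all 576 of them.
letterbox_lines = STORED_H * (4 / 3) / (16 / 9)   # 432 of the 576 lines
```

So the anamorphic trick buys back a third more vertical detail compared with letterboxing, at the cost of displaying stretched pixels.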
The early 16:9 widescreen TV pictures were principally made by adjusting the scans on a telecine transfer from film to TV, and TV cameras weren't ready for the change when widescreen TV took off. Initially the cameras ignored the top and bottom of the sensor chip: they looked at a 16:9-shaped section of the sensor and interpolated the pixels they could read up to the required number, effectively creating a letterboxed image in the camera and then doing an electronic zoom to restore the full height. After a while the manufacturers came out with wider sensor chips so that bodge wasn't necessary. By then Digibeta was the normal full-quality TV format, so the Digibeta camcorders got the new (and expensive) technology, and it took a few years for the wider chips to replace the older (and cheaper) 4:3 chips in less expensive formats. As an example, the Sony PD100 was an excellent piece of kit in many ways, but the softness of its pictures compared to Digibeta when you made a 16:9 show, which was routine at the time (in the UK, at least), meant that it was treated (somewhat unfairly, in my opinion) with derision by the engineers at some broadcasters.
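To put numbers on that "crop and zoom" bodge (again assuming PAL line counts, purely illustrative):

```python
# Early 16:9 mode on a 4:3 sensor: read a 16:9 band, then interpolate up.
sensor_lines = 576                                     # full 4:3 sensor height
lines_read = round(sensor_lines * (4 / 3) / (16 / 9))  # 432: the 16:9 band
zoom = sensor_lines / lines_read                       # ~1.33x vertical interpolation

# Interpolation invents no new detail: the effective vertical resolution
# is still 432 lines, a 25% loss versus a true 16:9 chip using all 576.
resolution_loss = 1 - lines_read / sensor_lines        # 0.25
```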
I hope that made sense :-)
Yes, I think so, thank you so much for explaining it to me, Andrew. I just want to check that I understand this correctly. So in summary, what you are saying is that Digibeta cameras originally had 4:3 sensors, which were later replaced with 16:9 sensors?
If that is the case, why are they classified as Standard Definition cameras?
I really appreciate your help.
All the Best Vincent :)
Standard definition conforms to SMPTE standard 601 while High Definition is SMPTE standard 709. These standards differ by raster size (720x480 vs 1920x1080 or 1280x720) and colorimetry. Standard def can be 4x3 or 16x9, and your receiver allows you to letterbox, side-cut or unsqueeze the picture to fit. High Def is always 16x9.
Early 16x9-capable cameras would top-and-bottom cut the 4x3 to 16x9, thus reducing the vertical resolution; later versions preserved the full height and offered a side cut to accommodate legacy 4x3 use.
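Those two compatibility crops work out like this, using the 720x480 raster from the post above (numbers illustrative):

```python
# The two legacy-compatibility crops between 4x3 and 16x9.
w, h = 720, 480   # SD raster used in this thread (illustrative)

# Top-and-bottom cut: getting 16x9 out of a 4x3 frame keeps 3/4 of the lines.
lines_kept = round(h * (4 / 3) / (16 / 9))   # 360 of the 480 lines

# Side cut: showing a 16x9 frame on a 4x3 set keeps 3/4 of the columns.
cols_kept = round(w * (4 / 3) / (16 / 9))    # 540 of the 720 columns
```

Either way you discard a quarter of the picture in one direction, which is why the full-height anamorphic approach won out.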
Eventually all the 4x3 receivers will be replaced and only 16x9 sets, in both SD and HD, will remain. Another possibility, of course, is that all conventional "TV" sets as we know them will disappear in favor of computer monitors, which was/is the true promise of "convergence", the unrealized dream at the origin of the High Def standards.
I can't actually remember whether there were any DigiBeta cameras that had 4:3 chips in them. DigiBeta was launched in 1993, about a year after the European "16:9 action plan" was launched. 16:9 SD got going quite quickly in the UK, Australia and much of Europe, so there was demand on this side of the pond for 16:9 DigiBeta in PAL right from the start, but it didn't take off in the USA, so there wouldn't have been a marketing requirement for NTSC versions to have 16:9 chips.
I guess that researching Sony camcorders of the 1990s would provide an answer. (There have been other manufacturers in the DigiBeta market, e.g. Ampex and Thomson, who were mostly badge-engineering Sony kit; Ikegami were around, but I can't remember what they had in the way of 16:9 SD.)
They're standard definition cameras because the pixel count and the broadcast bandwidth are the same as 4:3 standard def.
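A quick back-of-the-envelope comparison shows why the stored pixel count, not the shape, is what separates SD from HD (PAL raster assumed for the SD figure, illustrative only):

```python
# Same stored raster whether the SD picture is 4:3 or anamorphic 16:9.
sd_pixels = 720 * 576        # 414,720 pixels per frame
hd_pixels = 1920 * 1080      # 2,073,600 pixels per frame
ratio = hd_pixels / sd_pixels  # HD carries 5x the pixels of SD
```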
A quick Google search finds that the original Sony Digibeta camcorder was the DVW-700 (no mention of 16:9), and that was followed by the DVW-700WS, which was switchable between 4:3 and 16:9, so I guess the -WS version is where the 16:9 chips came along.
In layman's terms? Unfortunately, there aren't layman's terms. :) It's all really geeky technical jargon. For more information, this month's American Cinematographer magazine has the details.
The word pixel is two words smashed together: picture and element.
There are three types of pixels:
1. imaging pixels
2. digital pixels
3. display pixels
Your question is really about how imaging pixels become digital pixels and then how digital pixels become display pixels.
The imaging pixel, regardless of its size, captures light and converts it into a digital pixel; that part happens inside the camera and is set by the manufacturer.
Display pixels work out roughly like the following when resolving digital pixels:
Standard definition, for digital video, means the display pixels are 720x480.
Widescreen standard definition means the display pixels are 720x406.
High Definition means two display sizes 1280x720 & 1920x1080.
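As a small sketch of how one stored SD raster can serve both display shapes, using the 720x480 figure from the list above (the pixel-aspect formula is a simplification, not a broadcast-exact value):

```python
# One stored raster, two display shapes, via pixel aspect ratio (PAR).
stored_w, stored_h = 720, 480   # SD raster from the list above

def pixel_aspect(target_w, target_h):
    """How much wider than tall each stored pixel must display
    for the frame to come out at the target shape."""
    return (target_w / target_h) * (stored_h / stored_w)

par_4x3 = pixel_aspect(4, 3)     # ~0.889: pixels displayed slightly narrow
par_16x9 = pixel_aspect(16, 9)   # ~1.185: pixels displayed stretched wide
```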
There's more to it, but that's a good start.