Mid 80s-Early 90s “film-to-video transfer” look in AE
Hypothetically, I’m filming a short that parodies television from the late 80s and early 90s. The piece is shot on DSLR in such a way that it mimics the look of 35mm film (composition, lighting, grain and all), but I want the finished product to look like a single-camera show like Pee-Wee’s Playhouse, Beverly Hills 90210, or Melrose Place, which were all shot on film but transferred to one-inch videotape for the final edit. Ultimately, I turn to Red Giant’s VHS plugin, since it has a sizable number of presets and individual features that mimic videotape formats. How would I go about achieving the one-inch tape look digitally with that plugin?
PS: if there are any industry veterans out there who know the process well, your input is greatly appreciated in advance
As it happens I've done a LOT of degrading; two films where I did substantial degrading were "The Fourth Kind" and "Evidence". I'm also a former broadcast engineer, so I know the inner workings of both 1" machines and the telecine methods used back in the 80s & 90s (ugh, dating myself, LOL).
I don't know the Red Giant plug-in — I create my own plug-ins/scripts/expressions/FX chains. It's really a matter of the look you're going for, and how much time and effort you want to put into it. As such, I can't help you with the Red Giant plug-in itself, but I can give you some general advice and things to look for or consider:
How Things Worked (most of the time) specific to the era and quality you mentioned:
1) CRT DISPLAYS: Back in those days, before the widespread adoption of LCDs, TVs and monitors were CRT type, wherein an electron beam was directed at a glass screen covered in phosphor, and the electrons would excite the phosphor to cause light to be emitted. And it was all analog, so there was a lot of "trickery" to make images work.
One important trick is the nature of interlace. While people refer to NTSC video as "30 frames a second", in fact if you were using an analog video camera, you would see 60 "frames" per second, each at half the vertical resolution but all 60 at full horizontal resolution. These "half res frames" are called fields. As the electron beam would scan across the CRT display, it would SKIP every other line on the first field, then fill in those missing lines on the second field. The reason this was done was to keep the perceptual image refresh rate high (60 per second) in order to reduce flicker, much the same way the shutter in a film projector has three blades so each film frame is flashed on screen three times before the projector pulls down the next frame.
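The field mechanics above can be sketched in a few lines of NumPy. This is just an illustration of the "skip every other line, then fill in the gaps" behavior; the 480-line frame size and top-field-first order here are assumptions for the example, not universal:

```python
import numpy as np

def split_into_fields(frame):
    """Split a progressive frame (H x W) into two half-height fields.

    Field 1 carries the even-numbered scanlines, field 2 the odd ones.
    Which field is displayed first varied in practice; top-field-first
    is assumed here purely for illustration.
    """
    field1 = frame[0::2, :]   # even lines
    field2 = frame[1::2, :]   # odd lines
    return field1, field2

def weave_fields(field1, field2):
    """Re-interleave two fields back into a full frame."""
    h, w = field1.shape[0] + field2.shape[0], field1.shape[1]
    frame = np.empty((h, w), dtype=field1.dtype)
    frame[0::2, :] = field1
    frame[1::2, :] = field2
    return frame

# A 480-line "frame" becomes two 240-line fields; NTSC displays
# 60 such fields per second.
frame = np.arange(480 * 4).reshape(480, 4)
f1, f2 = split_into_fields(frame)
assert f1.shape == (240, 4) and f2.shape == (240, 4)
assert np.array_equal(weave_fields(f1, f2), frame)
```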
2) NTSC COLOR: (aka Never The Same Color) This much-maligned color encoding system was developed in the 1950s, and was hampered somewhat by the FCC's insistence that any color system be backwards compatible with existing black and white televisions. So basically it's a black and white luminance signal with color frosting smeared on top. The color was encoded onto the luminance signal using QAM*, with the R, G, and B axes sitting roughly 120 degrees apart from each other (as you can see on a vectorscope). But all three colors were not transmitted: only two color-difference channels (I and Q) were, and the RGB values were derived from those two plus the luminance signal.
*QAM essentially allows you to combine two amplitude-modulated signals, and then use phase shifts relative to a subcarrier to separate them again. The subcarrier used for the color signal was 3.58 MHz, which ended up being 227.5 sine-wave cycles per line — but that's not the same as "pixels", as it was a continuously modulating analog signal.
Since the color was combined with the luminance to form a single signal, it was called "composite video".
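To make the QAM idea concrete, here is a rough NumPy sketch of encoding two chroma values onto a 3.58 MHz subcarrier on top of luminance, then recovering them by synchronous demodulation. The sample rate and the flat one-line test signal are simplifying assumptions; real decoders phase-lock to the color burst and low-pass filter rather than averaging a flat signal:

```python
import numpy as np

FSC = 3.579545e6          # NTSC color subcarrier, Hz
FS = 4 * FSC              # sample rate chosen for this sketch (assumption)

def encode(y, i, q, n):
    """Composite = luminance + I and Q in quadrature on the subcarrier."""
    t = np.arange(n) / FS
    carrier = 2 * np.pi * FSC * t
    return y + i * np.cos(carrier) + q * np.sin(carrier)

def decode(composite):
    """Synchronous demodulation: multiply by the reference carrier and
    average over whole subcarrier cycles, which strips the other axis.
    (Averaging works here only because the test signal is flat.)"""
    n = len(composite)
    t = np.arange(n) / FS
    carrier = 2 * np.pi * FSC * t
    i = np.mean(2 * composite * np.cos(carrier))
    q = np.mean(2 * composite * np.sin(carrier))
    y = np.mean(composite)
    return y, i, q

# One "flat" stretch of a scanline: Y=0.5, I=0.2, Q=-0.1,
# sampled over exactly 100 subcarrier cycles.
comp = encode(0.5, 0.2, -0.1, 4 * 100)
y, i, q = decode(comp)
assert abs(y - 0.5) < 1e-6 and abs(i - 0.2) < 1e-6 and abs(q + 0.1) < 1e-6
```

The 90-degree offset between the cosine and sine terms is what lets two signals share one subcarrier, and imperfect separation of that shared signal is exactly where artifacts like dot crawl and color bleed come from.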
3) TELECINE: The high-end telecine transfer method was the "flying spot" Rank Cintel machine. The system worked by using a small green CRT like you'd see on an oscilloscope, and it scanned just like a CRT monitor would, but there was no image on it, just the flying dot, which was focused onto the frame of film. A light sensor on the other side of the piece of film would then sample the light as the dot scanned over the frame.
An important aspect of the system was that there was no pull down claw - the film moved continuously, and each frame was scanned as it moved past the gate. The scanning raster on the CRT would adjust for the film's motion.
4) 1" VIDEO TAPE: This was the pinnacle of composite video technology. Composite because the color signal was recorded still encoded with the 3.58 MHz QAM method: the video was FM modulated from 5 to 10 MHz (highband), with the 3.58 MHz color signal sitting "under" the frequency of the FM luminance carrier (but all recorded together at an angle on the tape by heads on a spinning drum).
5) FILM: When shooting on film, most TV shows shot at 24 frames per second** for a number of reasons. First, it looks more "filmic", and second, it uses less film, which saves $$$. In fact many TV shows shot in "3 perf", where they used 35mm cameras with a modified movement that would pull down only 3 instead of the usual 4 perfs of film per frame. Because the Rank Cintel had no pull-down claw and scanned the film electronically, this was easily accommodated. On many shows the 3-perf film negative was conformed in a traditional negative cut, and the resulting single strip was what was telecined — the film negative then was the international master, as the PAL and SECAM versions could just be telecined from the O-neg.
Now, to get 24 fps film to fit into 30 fps video, remember that it's really 60 fields per second. This leads to the telecine pulldown: 4 film frames fit into 5 video frames (or ten video fields), such that film frame 1 goes to video fields A1 and A2, film frame 2 goes to B1, B2, C1, film frame 3 goes to C2, D1, and film frame 4 goes to D2, E1, E2.
** Commercials were sometimes shot at 30 fps, but TV shows were/are more typically 24 (really 23.976) in the US.
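The cadence above is easy to sanity-check in code. A minimal Python sketch of the 2:3:2:3 field repetition (letters stand in for film frames; this only models the mapping, not the actual field splitting):

```python
def pulldown_32(film_frames):
    """Map 24 fps film frames to 60i fields: frame 1 -> 2 fields,
    frame 2 -> 3, frame 3 -> 2, frame 4 -> 3, so every 4 film
    frames become 10 fields (5 video frames)."""
    cadence = [2, 3, 2, 3]
    fields = []
    for idx, frame in enumerate(film_frames):
        for _ in range(cadence[idx % 4]):
            fields.append(frame)
    return fields

fields = pulldown_32(['A', 'B', 'C', 'D'])
assert fields == ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']

# Pairing fields into video frames shows the two "mixed" frames
# (B/C and C/D) that contain fields from different film frames.
video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
assert video_frames == [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

Those two mixed video frames are why pulldown footage judders slightly, and why any added grain has to stay locked to the film frame rather than the video frame.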
The point of this little trip down memory lane is that knowing a bit about how the images were created can lead us to some of the common artifacts and characteristics.
1) To match the "motion feel" you want to shoot at 24 fps, then use After Effects to do a 3:2 pulldown to create 30 fps interlaced (60 fields per second) video. Also, if you are adding "grain", it is important that the grain pattern match the frame rate of the film and stay static across all the fields that a particular film frame is pulled down to.
2) The CRT scanlines and/or phosphor dot pattern are commonly emulated to give a "video feel", but too often the result feels "synthetic" or put-on. A more natural method is to take WHITE NTSC VIDEO and multiply-transfer-mode it onto the target image.
3) The NTSC chroma signal created a number of artifacts. One was "dot crawl". Because the subcarrier is an odd multiple of half the line rate (455/2 times the line frequency), the phase of the subcarrier shifts slightly relative to the picture every frame. On highly saturated sharp edges you could see the "dots crawl". You can emulate this by multiplying in a 45-degree crosshatch. But don't go too far with this one.
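A minimal sketch of that crosshatch idea, assuming a multiply layer; the stripe pitch and the 0.9 floor are arbitrary starting values, and in practice you would mask this to saturated edges only:

```python
import numpy as np

def crosshatch(h, w, frame_idx, period=4):
    """45-degree crosshatch that creeps one pixel per frame,
    a rough stand-in for subcarrier dot crawl. `period` is the
    diagonal stripe pitch in pixels (an arbitrary choice here).
    Values stay in [0.9, 1.0] so a multiply transfer mode only
    darkens the image slightly: "don't go too far"."""
    y, x = np.mgrid[0:h, 0:w]
    diag = (x + y + frame_idx) % period
    return np.where(diag == 0, 0.9, 1.0)

p0 = crosshatch(8, 8, 0)
p1 = crosshatch(8, 8, 1)
# The pattern shifts one pixel along the diagonal each frame: crawl.
assert np.array_equal(np.roll(p0, -1, axis=1), p1)
assert p0.min() == 0.9 and p0.max() == 1.0
```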
4) The nature of QAM encoding resulted in less-than-great color reproduction. Saturated reds in particular would bleed, bloom, and smear. (Even today, when casting directors are casting cars, they ask for no red.)
5) There was a lot of image dynamic range compression done, particularly for broadcast. Highlights were pushed toward clipping and "soft clipped" so that they rolled into clip. This was partly to keep the image "viewable" even when reception was less than ideal (which was always, unless you had cable).
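One way to sketch that highlight roll-off digitally (the knee position and the exponential shoulder shape here are just one plausible choice, not the curve any particular broadcast chain used):

```python
import numpy as np

def soft_clip(x, knee=0.8):
    """Soft-clip highlights: linear below `knee`, then roll off
    smoothly toward 1.0 instead of clipping hard."""
    x = np.asarray(x, dtype=float)
    over = x > knee
    out = x.copy()
    # Above the knee, compress the remaining range asymptotically to 1.0.
    out[over] = knee + (1.0 - knee) * (1.0 - np.exp(-(x[over] - knee) / (1.0 - knee)))
    return out

vals = soft_clip(np.array([0.5, 0.8, 1.0, 1.5]))
assert vals[0] == 0.5 and vals[1] == 0.8   # untouched below the knee
assert vals[2] < 1.0 and vals[3] < 1.0     # highlights rolled off, never hard-clipped
assert vals[3] > vals[2]                   # still monotonic
```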
6) But also, I think you'd find that colors and luminance did not have equal nor well-behaved transfer curves, and the gamma for each color primary was in effect a bit different (at least perceptually) in more complex images.
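That mismatch is cheap to emulate by giving each primary its own gamma. The gamma values below are arbitrary starting points to dial in by eye, not measured data:

```python
import numpy as np

def per_channel_gamma(rgb, gammas=(0.95, 1.0, 1.08)):
    """Apply a slightly different gamma to each primary, so the
    channels' transfer curves are no longer identical."""
    rgb = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0)
    return np.stack([rgb[..., c] ** g for c, g in enumerate(gammas)], axis=-1)

# A neutral gray picks up a slight color cast, because each channel's
# curve now differs (exponent < 1 lifts, > 1 darkens, for values < 1).
px = per_channel_gamma(np.array([[0.5, 0.5, 0.5]]))
assert px[0, 0] > px[0, 1] > px[0, 2]
```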
7) The colorspace is defined by the SMPTE-C ICC profile. It should be noted that SMPTE-C is a little different from the original NTSC spec from 1953 (which was never achievable given the technology). The phosphors defined in SMPTE-C became the de facto standard in the late 60s, and were adopted officially by SMPTE circa 1987.
Notice that SMPTE C is very close to Rec709.
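How close? The published chromaticity coordinates of the primaries (worth double-checking against the specs themselves) differ by at most about 0.01 in either coordinate:

```python
# Chromaticity (x, y) of the primaries, per the published specs.
SMPTE_C = {'R': (0.630, 0.340), 'G': (0.310, 0.595), 'B': (0.155, 0.070)}
REC_709 = {'R': (0.640, 0.330), 'G': (0.300, 0.600), 'B': (0.150, 0.060)}

for p in 'RGB':
    dx = abs(SMPTE_C[p][0] - REC_709[p][0])
    dy = abs(SMPTE_C[p][1] - REC_709[p][1])
    # Each primary differs by at most 0.01 in either coordinate.
    assert dx <= 0.0100001 and dy <= 0.0100001
```

In practice this is why footage graded in Rec. 709 reads as "close enough" to period SMPTE-C material; the look differences come mostly from the transfer curve and the analog artifacts, not the primaries.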
8) 1" Type C videotape machines usually had time base correctors to clean up the signal, but without one there is dihedral error, where each line of the scanned video does not exactly line up with the previous or next line.
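A simple way to fake that line-to-line misalignment digitally: shift each scanline horizontally by a small random amount. The shift range here is an arbitrary severity knob, and real time base error drifts smoothly rather than being independent per line, so treat this as a rough sketch:

```python
import numpy as np

def line_jitter(frame, max_shift=2, seed=0):
    """Emulate an uncorrected time-base error: each scanline is
    shifted horizontally by a small random amount, so lines don't
    quite line up with their neighbors."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(frame)
    for row in range(frame.shape[0]):
        shift = rng.integers(-max_shift, max_shift + 1)
        out[row] = np.roll(frame[row], shift)
    return out

frame = np.tile(np.arange(16), (8, 1))   # 8 identical scanlines
jittered = line_jitter(frame)
assert jittered.shape == frame.shape
# Rows are shifted copies of the original line, not new content.
assert all(sorted(row) == list(range(16)) for row in jittered)
```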
VFX & Title Supervisor
I have to say, your advice was dense but helpful to a young'un like me.
I just have two questions for you:
1) How would I go about making sure my grain asset is static during each of my pulled-down frames? Is baking it in and then pulling down my footage the answer?
2) How would I go about recording NTSC video of pure white? Will the signal stay intact if I record it off a CRT TV and composite it onto digitally made footage? Will it affect the dynamic range? Will it create color bleeding?
1) Personally, for real-looking grain I have elements that are 24 fps footage of a neutral gray card that I can use to add grain - the grain footage is 24 fps, and the underlying footage is 24 fps.
If you are using the Grain plug-in in After Effects, you would simply have your 24 fps footage inside a 24 fps comp, so that the grain plug-in would only update on whole frames. Using After Effects, the pulldown and interlacing can be done automatically in the output module.
2) Depends on the effect you are looking for. What I did in the past was digitize a 75% white raster. Though I've also done blue or green, either shooting the actual screen or digitizing the signal. I developed a synthetic LCD monitor pixel matrix for the screens in "Dark Skies". I just mentioned the white screen as a fast, down-and-dirty way to get something with real artifacts and subtle scanlines, with some noise and motion.
VFX & Title Supervisor
I found this video on YT which serves as a near perfect example of the look that I want to achieve. Plus, the uploader put the entire workflow in the description, so I can tweak it to fit my specifications. Would you consider this a suitable digital alternative?
I looked at his setup, pretty clever.
I'll mention that on layer 3, "2) Fast Blur = 50 / blur the stripes", I think he meant to say that Fast Blur should be set to horizontal only; otherwise the entire layer would not have "lines".
ALSO, you could add a displacement effect to actually make the image skew per the lines, driven from the layer 3 noise.
VFX & Title Supervisor