
Why EVER use interlaced?

COW Forums : Apple Final Cut Pro Legacy

Lance Drake
Why EVER use interlaced?
on May 28, 2010 at 6:51:04 pm

Can anyone cite ANY reason why you'd not choose to remove interlacing in an FCP project?

Yes, I understand the NTSC 60i format displays interlaced images - but my question is, "Why not remove interlacing from the original?"

This has got to sound like a dumb question - but, to me, everything looks so much better without interlacing - what's the reason for ever retaining it in your footage that comes out of the camera?

Thanks!



Shane Ross
Re: Why EVER use interlaced?
on May 28, 2010 at 7:04:33 pm

Sports, news...reality shows...some docs. People like smooth, fluid motion for many things. If you don't want it, if you want the progressive feel, then shoot progressive and work progressive. No one is twisting your arm to work interlaced. But there are several shows that do want that, so they shoot and edit that way.



Shane



GETTING ORGANIZED WITH FINAL CUT PRO DVD...don't miss it.
Read my blog, Little Frog in High Def



Lance Drake
Re: Why EVER use interlaced?
on May 28, 2010 at 7:18:34 pm

Hi Shane - That's interesting. My question comes from the fact that I shoot with a Sony HD camera that records interlaced footage to tape, and when I look at individual frames, there are very often artifacts along horizontal edges that make the video look more like 'video'. As you suggest, that's not my desired intent or style, as I create what I WISH was shot on film - or that I had access to a nice progressive camera. So you've answered my question, which I understand to be, "It's only important if your customer cares" - which makes sense.

Thank you





Shane Ross
Re: Why EVER use interlaced?
on May 28, 2010 at 7:33:49 pm

Tape based cameras...many of them...still shoot interlaced. That is the nature of tape. The only ones that don't are the ones that shoot a true progressive format, 720p60...like the Panasonic Varicam. 1080 is an INTERLACED format by nature, so if you want to shoot progressive, say 24p, then it needs to shoot 24p over 30...and the tape is interlaced because, well, 1080i is an interlaced format. You can REMOVE the pulldown to get to 1080p 23.98...but with HDV, a GOP format, you can't do that until you convert to a non-GOP format like ProRes...since GOP isn't an actual frame format...it is 3 real frames and then a Group of Pictures in between.



Shane



GETTING ORGANIZED WITH FINAL CUT PRO DVD...don't miss it.
Read my blog, Little Frog in High Def



Michael Gissing
Re: Why EVER use interlaced?
on May 28, 2010 at 11:20:50 pm


I can think of three good reasons to shoot and post produce interlaced.

Firstly, smoother motion. 24 & 25 fps progressive place limitations on the speed of pans and the capture of fast-moving images. A colleague who shoots underwater footage showed me some comparisons between 60i and 24PsF on his HDCAM, and the 60i looked so much better. I work on docos, and lots of editors like to speed-manipulate to create slo-mos. These are not scripted or shot at suitable frame rates or shutter speeds, and slo-mos from interlaced footage are smoother as a general rule.

Titles. Scrolling credits in interlace are much smoother and give much greater latitude for speed. These days we are being shoe-horned into lengths for credit rolls and the speed becomes determined by how many credits. I struggle to get progressive credit rolls to work unless I can control the speed which means losing control of the length. Clients don't understand and the broadcasters think credit roll lengths are more important than the content.

Broadcast. All broadcasters world wide are still beaming out standard def interlaced pictures. 99% of my work is for international broadcasters so the final tape is always interlaced.

If you like the progressive look then shoot it in the camera and take the time and trouble to understand shutter speeds. Plan and shoot slo-mos at higher frame rates. From there you can edit in interlaced final timelines for broadcast delivery and get smooth credit rolls. Deinterlacing in post loses resolution. Many people see deinterlacing as a way of fixing a field-order problem. It might fix the problem, but it costs you resolution. There are better ways to fix interlace problems, which are usually operator error.




Lance Drake
Re: Why EVER use interlaced?
on May 29, 2010 at 12:45:52 am

Thank you so much for taking the time to add so much info to the discussion.

For me, the question of losing resolution is not an issue as it seems that VIDEO has TOO MUCH resolution - at least when subjectively compared against the creamy smooth look of film stock. For me, losing some resolution is a small price to pay for also losing the moving-edge artifacts that yell VID-EEE-OOO!

What I probably need to do is move into the 21st century and get a camera that shoots both progressive AND interlaced and saves the full-res imagery to a card [and compresses it into some MPEG space-saver format].

The missing ingredient in my situation is money. But thanks for the insights of the people who are working at a level somewhat higher than I am.





Bouke Vahl
Re: Why EVER use interlaced?
on May 29, 2010 at 11:14:11 am

[Lance Drake] "losing the moving-edge artifacts"

You still don't get it.
These things are not artifacts. You won't see them on an interlaced monitor.
Interlaced means two half resolution images in one.
We call them 'fields'

If you watch this on a proper interlaced monitor, only one of the two is displayed at a given time.
(For this you need an IO card.)

Thus, Interlaced IS 'reduced' resolution.
On still images, you have full resolution. On motion, you have double the framerate at half the resolution.

Now, moving images shot interlaced look horrible on a progressive monitor, especially when you freeze-frame them.
This is why a decent NLE displays only one field when you pause.

If your output is for computer monitors, you can create a half-resolution / double-speed movie through AE.
AE (if you interpret the footage right) can render each field to a frame.
Thus, your 30i becomes 60p.

(Of course you need a bit of horsepower to play back those files)

And then you enter a new problem. 60p is close to the refresh rate of a common flatscreen, but it isn't locked to it. Thus, every now and then a frame is updated halfway through the refresh of your screen, showing you two frames at the same time.
This gives a 'jerky' look, and it ain't pretty.
No real solutions for that at the moment...

(at least not that i'm aware of)
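The AE field-to-frame conversion described above (often called 'bob' deinterlacing, where each field becomes a line-doubled frame so 30i becomes 60p) can be sketched in a few lines of numpy. This is a minimal illustration, not AE's actual processing; the function name and the naive line doubling are mine:

```python
import numpy as np

def bob_deinterlace(frame, top_field_first=True):
    """Split one interlaced frame into two line-doubled progressive frames.

    frame: (height, width) array. Rows 0, 2, 4, ... hold the top field,
    rows 1, 3, 5, ... the bottom field. Returns two full-height frames,
    so 30 interlaced frames become 60 progressive ones.
    """
    top = frame[0::2]
    bottom = frame[1::2]
    first, second = (top, bottom) if top_field_first else (bottom, top)
    # Naive line doubling; a real deinterlacer would interpolate instead.
    return np.repeat(first, 2, axis=0), np.repeat(second, 2, axis=0)

frame = np.arange(1080 * 4).reshape(1080, 4)  # toy 1080-line "frame"
f1, f2 = bob_deinterlace(frame)
assert f1.shape == frame.shape and f2.shape == frame.shape
```

Each output frame carries only 540 distinct lines, which is the point made above: motion gets sampled 60 times a second, but at half the vertical resolution.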



Bouke

http://www.videotoolshed.com/
smart tools for video pros



Tom Brooks
Re: Why EVER use interlaced?
on May 29, 2010 at 12:04:43 pm

[Bouke Vahl] "Now moving images shot interlaced look horrible on a progressive monitor, and especially when you freeze frame it. "

Does it really look so horrible? I watch it all the time on my LCD TV at home. True, almost all of it is pure garbage, and maybe I'm conditioned to accept it, but I don't perceive artifacts (or call it crappy deinterlacing if you will) when I watch NBC. The TV, which displays progressive images, combines the fields in a way that is...acceptable. Does anyone watch TV on a CRT anymore?

Why do we have so many formats, anyway? If the US broadcasts in i59 or p59 and Europe broadcasts in i50 and movies are p24, why all these p30 variations? It's interesting to see that some of you use them a lot, even for broadcast.




John Heagy
Re: Why EVER use interlaced?
on May 29, 2010 at 12:25:16 am

[Shane Ross] "1080 is an INTERLACED format by nature"

That's not correct, we shoot 1080p30 (29.97psf) HDCam with our Sony F900s when called for. 1080p29.97 and 1080p30.00 are both part of the SMPTE HDTV standards.

The American LeMans Monterey "DocuRace", airing this Sat on CBS at 1:30pm ET, has some 1080p30 in it. This show is a potpourri of every imaginable format - 1080i60, 1080p30, 1080p24/60, 720p60, even some PAL 25/24/60 widescreen SD.

Check it out or DVR it.
http://www.racer.com/cbs-takes-docudrama-approach-to-monterey-alms-broadcas...

John Heagy




Rafael Amador
Re: Why EVER use interlaced?
on May 29, 2010 at 1:09:36 am

[John Heagy] "[Shane Ross] "1080 is an INTERLACED format by nature"

That's not correct, we shoot 1080p30 (29.97psf) HDCam with our Sony F900s when called for. 1080p29.97 and 1080p30.00 are both part of the SMPTE HDTV standards. "

That's correct.
When you watch 1080p30 you are really watching 1080i60.
When you are shooting 1080p30 with the Sony, you are getting 1080i60 out of your HD/SD-SDI.
That is what is called PsF: progressive segmented frame. The picture is fully progressive, but it is sent as interlaced.
rafael



http://www.nagavideo.com



John Heagy
Re: Why EVER use interlaced?
on May 29, 2010 at 2:49:20 am

[Rafael Amador] "When you watch 1080p30 you are really watching 1080i60."

I knew that was coming...

FCP seqs/files and broadcast formats are handled differently. 1080i60 is a broadcast format, and an FCP seq setting, that can contain 30p. However, when talking about progressive camera files, they're not segmented. When one shoots 1080p30 with a Canon 5D, or a Pana 2700, it will be truly "p" at the file level. The F900 30p was a poor example on my part.

The whole interlace "issue" is so hard for people to understand... so I jump to "correction" when I see blanket statements that can be misinterpreted... sorry Shane.

We use 1080i FCP seqs even with 30p files because of the "truly" interlaced material we normally work with. Unfortunately, it's all too easy for an editor to mistakenly click "OK" when the seq-setting mismatch pops up as a 30p file is the first onto an FCP 1080i60 timeline. This will set the Field Dominance to "None" and force any renders to 30p.

John Heagy







Rafael Amador
Re: Why EVER use interlaced?
on May 29, 2010 at 6:12:56 am

John,
Cameras and NLEs are proprietary. The manufacturers can do whatever they want inside the machine or the application, as long as the output is standard.
Shane was referring to the "standard VIDEO 1080".
SLRs shoot progressive, but if they were able to output a standard 1080 stream, it would be PsF.
rafael


http://www.nagavideo.com




John Heagy
Re: Why EVER use interlaced?
on May 29, 2010 at 4:04:32 pm

Absolutely correct, not arguing that... but understanding the difference between broadcast standards and "proprietary" file workflows and delivery is equally important. Meeting broadcast standards will become less important, over time, as people shift to "online" delivery.

Apple requires all HD to be 720p24 for iTunes - that's not a broadcast standard - and no VTR or video card will output it. You'll not find a single online video site that "broadcasts" 1080i60. If you see interlaced video online... they don't know what they're doing.

The clip below is an example of how not to do it, but it is ironic, as the person who encoded it did know how to maintain interlacing when scaling video, something Compressor can't do. In this case maintaining the 60i standard was the wrong thing to do... it needed to be de-interlaced to 30p.

http://www.comcast.net/video/-danica-patrick-and-amani-toomer-go-karting-/6...

We are arguing over semantics and context, but the devil is always in the details.

John Heagy



gary adcock
Re: Why EVER use interlaced?
on May 31, 2010 at 2:55:31 pm

[John Heagy] "That's not correct, we shoot 1080p30 (29.97psf) HDCam with our Sony F900s when called for. 1080p29.97 and 1080p30.00 are both part of the SMPTE HDTV standards. "

I am with Shane on this one.

1080 is always transported as an interlaced signal - as that is what PsF is: an interlaced transmission and distribution signal that can be interpreted as either progressive or interlaced depending on whether the receiver can understand progressive.

gary adcock
Studio37
HD & Film Consultation
Post and Production Workflows for the Digitally Inclined
Chicago, IL

http://blogs.creativecow.net/24640




Bouke Vahl
Re: Why EVER use interlaced?
on May 31, 2010 at 3:07:57 pm

This is not correct.
ANY device understands progressive.
A progressive signal in an interlaced stream only means that there is no time difference between the fields.

Displaying Interlace correctly is the issue here.



Bouke

http://www.videotoolshed.com/
smart tools for video pros




gary adcock
Re: Why EVER use interlaced?
on May 31, 2010 at 3:42:47 pm

[Bouke Vahl] "This is not correct. ANY device understands progressive. "

Sorry Bouke,
for the record, you cannot send a "true P" signal to a CRT in any way, shape or form.


I have fought this argument many times (even John said it was a bad example of what he was trying to say). The SMPTE spec for delivery does include the P vs PsF debate, but in reality not many people ran into P vs PsF until RED came on the scene; until that time all cameras were delivering PsF and no one was the wiser.


Segmented frame was designed to deliver progressive content over the existing interlaced transmission system; however, it is the receiving device that determines whether the processing will be interlaced or progressive for playback.

Displays for the most part are dumb devices and do not care about progressive or interlace. For the most part, Sony has always cheated the signal naming on their own displays, whereas Panasonic has always shown the signal type.


gary adcock
Studio37
HD & Film Consultation
Post and Production Workflows for the Digitally Inclined
Chicago, IL

http://blogs.creativecow.net/24640




Bouke Vahl
Re: Why EVER use interlaced?
on May 31, 2010 at 3:52:27 pm

Gary,
Please enlighten me about P vs PsF.

I think I know my stuff pretty well, and this is new to me...
(and way beyond the OP's question, but that is not important now)

The way I see it, you can put a full-frame progressive signal in an interlaced stream. You cannot put an interlaced stream to an output device that is not interlaced.
(well, you can, but an interlaced frame will show both fields at the same time, which causes the lines that some refer to as 'artifacts')



Bouke

http://www.videotoolshed.com/
smart tools for video pros



John Heagy
Re: Why EVER use interlaced?
on Jun 1, 2010 at 3:04:20 am

Ok.. let's step back and define some things, as terms are being used out of context. Basically, file formats vs baseband video. Let's look at a typical 24p HD file-to-video path.

PsF is an HD-SDI baseband video signal used to feed 24p VTRs and the like. If you have an FCP 24p ProRes seq, you will need to use a Kona3 24PsF video output preset to go to an HDCAM deck set the same way. Now, since PsF is only a way to transport progressive frames split into 2 fields (segmented), there is no such thing as an FCP 24PsF seq preset, as 24fr progressive media is truly progressive, and not processed in field segments inside FCP.

So we have an FCP seq that is 24p and a video HD-SDI output that is 24PsF... agreed?

Now let's take a 1080i60 timeline with media that has true field motion. That goes to tape with a 1080i 29.97 Kona 3 output setting. If I deinterlace this media, it will of course go to tape with the same preset, essentially 30p going to a 60i tape. Again, no 1080PsF30 seq preset... that would be the same 1080i setting with field dominance set to "None".

Now... Sony HDCAMs do have a 29.97 PsF system setting. I've never tried it... never needed it... but it's there. On the FCP side... there is no 29.97 PsF Kona3 output setting. My question is: what's the difference between recording 1080p 29.97 media to an HDCAM with a system setting of 1080i 29.97 (59.94 fields, really) and one with 1080PsF 29.97? I don't think there is any difference, just the lack of field-2 editing, I would assume. For all practical purposes, 1080i60 = 1080PsF30. This goes to Bouke's point and is a question I've always wondered about.

John Heagy





gary adcock
Re: Why EVER use interlaced?
on Jun 1, 2010 at 3:04:10 pm

[John Heagy] "On the FCP side... there is no 29.97 PsF Kona3 output setting."

I have had this argument with AJA for years,
the first response was that nobody shoots 30p; then the response became: why does anyone shoot 30p?

But I do not have an answer.

IMHO it is like the reason that there has never been a 1080 25PsF setting either (as AJA lists the 1080 as 25i). The signal as defined over the pipe in the PAL world is 25PsF and is always interlaced, whereas there are devices that play back 24p (24.0 and 23.98) as P, not just as PsF.




gary adcock
Studio37
HD & Film Consultation
Post and Production Workflows for the Digitally Inclined
Chicago, IL

http://blogs.creativecow.net/24640




John Heagy
Re: Why EVER use interlaced?
on Jun 1, 2010 at 4:28:03 pm

But the question remains: how is 30PsF different from 60i? The Wiki page simply describes an interlaced signal containing 30p. 60i does this already, so what does 30PsF gain one? Does it contain metadata in the stream that would help an MPEG2 encoder, or invoke 2:2 on a display? I imagine the reason AJA doesn't include 30PsF is that there is no difference.

John Heagy



gary adcock
Re: Why EVER use interlaced?
on Jun 1, 2010 at 2:51:29 pm

[Bouke Vahl] "Please enlighten me about P vs PsF. "

Lets start with a simple wiki page

http://en.wikipedia.org/wiki/Progressive_segmented_frame

I quote:

With PsF, a progressive frame is divided into two segments, with the odd lines in one segment and the even lines in the other segment. Technically, the segments are equivalent to interlaced fields, but unlike native interlaced video, there is no motion between the two fields that make up the video frame: both fields represent the same instant in time. This technique allows for a progressive picture to be processed through the same electronic circuitry that is used to store, process and route interlaced video.

The PsF technique is similar to 2:2 pulldown, which is widely used in 50 Hz television systems to broadcast 25 frame/s progressive material, but is rarely employed in 60 Hz systems as there is very little content of progressive 30 frame/s material. The 2:2 pulldown scheme had originally been designed for interlaced displays, so fine vertical details are usually filtered out to minimize interline twitter. PsF has been designed for transporting progressive content and therefore has no such filtering.
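The segment-and-reassemble round trip the quote describes is easy to demonstrate: splitting a progressive frame into odd-line and even-line segments and weaving them back together loses nothing, which is why a PsF signal remains progressive. A minimal numpy sketch (the function names are mine, not from any standard):

```python
import numpy as np

def to_psf_segments(frame):
    """Divide a progressive frame into two segments: one with the odd
    lines, one with the even lines. Both represent the same instant."""
    return frame[0::2].copy(), frame[1::2].copy()

def weave(seg1, seg2):
    """Reassemble the two segments into the original progressive frame."""
    out = np.empty((seg1.shape[0] + seg2.shape[0], seg1.shape[1]),
                   dtype=seg1.dtype)
    out[0::2], out[1::2] = seg1, seg2
    return out

frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
seg1, seg2 = to_psf_segments(frame)
assert np.array_equal(weave(seg1, seg2), frame)  # transport is lossless
```

With native interlace, by contrast, the two "segments" would be sampled a field-time apart, so weaving a moving subject would show the comb lines discussed earlier in the thread.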


gary adcock
Studio37
HD & Film Consultation
Post and Production Workflows for the Digitally Inclined
Chicago, IL

http://blogs.creativecow.net/24640




Bouke Vahl
Re: Why EVER use interlaced?
on Jun 1, 2010 at 3:40:18 pm

Well, yes. This is what I was saying...

What my question / our difference was:
you stated that not all displays can show progressive, while I stated that not all displays can show interlaced correctly...



Bouke

http://www.videotoolshed.com/
smart tools for video pros



Rafael Amador
Re: Why EVER use interlaced?
on Jun 1, 2010 at 4:07:15 pm

In short: PsF only happens in cables and tapes.
No sense when talking about a digital file.
rafael

http://www.nagavideo.com



John Heagy
Re: Why EVER use interlaced?
on Jun 1, 2010 at 4:45:47 pm

[Rafael Amador] "In short: PsF only happens in cables and tapes.
No sense when talking about a digital file. "


Precisely! This is the context that should be mentioned when speaking of 1080p and interlace.




Bouke Vahl
Re: Why EVER use interlaced?
on Jun 1, 2010 at 4:52:24 pm

Precisely NOT.
With interlacing in a digital file, you get the 'artefacts' the OP was complaining about.

And now I'm tired of this discussion, as it jumps all over the place.

Peace out,




Bouke

http://www.videotoolshed.com/
smart tools for video pros



Glenn Kenny
Re: Why EVER use interlaced?
on Jun 2, 2010 at 3:06:50 am

[gary adcock] "I am with Shane on this one.

1080 is always transported as an interlaced signal - as that is what PsF is: an interlaced transmission and distribution signal that can be interpreted as either progressive or interlaced depending on whether the receiver can understand progressive. "




I disagree, John is correct on this one.

1080 is not transported as an interlaced signal, unless it was originated as an interlaced signal. If a camera is shot at 23.98p, 24p, 23.98PsF or 24PsF, the signal is progressive, not interlaced. The PsF signal is transported and recorded as what could be viewed as a TDM (Time Division Multiplex) representation of the progressive signal, where half (the odd lines) are read out of the frame buffer and placed into the stream during one field time, then the other half (the even lines) are read from the frame buffer and placed into the transport stream during the next field time. It could have been done by reading the top half of the buffer (lines 1-540), then the bottom half (lines 541-1080), and have had the same result, but the odd/even line approach allowed an interlaced monitor to display the progressive frame without having to have an (at the time) expensive frame buffer before the CRT. It is important to understand that all that is being done is manipulating the timing of the transport of each line, not the information of each line; there is no change to the original signal, and therefore it stays progressive. Interlacing would only occur if some difference between the two halves of the frame were applied during transmission, which does not happen.

In the end, any display will interpret the PsF signal as progressive, since that is what it is. It may (if it is a CRT) display it as odd/even lines, but it is still progressive, with no temporal difference between the odd/even lines. If the signal is interlaced, the display will, if it is a CRT, display it as interlace, since that is what it is. There could be a temporal difference between the odd/even lines because of this, depending on the content (still or motion). If it is a more modern display, it will most likely display it as progressive in either case, by de-interlacing and scaling to fill the display. The means and methods of that process are very dependent on the make/model of the display, so the look can vary greatly with the same content.

In the end, there is no difference between psf and "true progressive" with the exception of the transport stream. When the final signal is displayed or processed, they are identical.

Sorry to be so long winded, and maybe overly technical, but hopefully this will clear some confusion and not add more.

Glenn Kenny



Tom Brooks
Re: Why EVER use interlaced?
on Jun 2, 2010 at 11:25:15 am

[Glenn Kenny] "In the end, any display will interpret the PsF signal as progressive, since that is what it is. It may (if it is a CRT) display it as odd/even lines, but it is still progressive, with no temporal difference between the odd/even lines. If the signal is interlaced, the display will, if it is a CRT, display it as interlace, since that is what it is. There could be a temporal difference between the odd/even lines because of this, depending on the content (still or motion). If it is a more modern display, it will most likely display it as progressive in either case, by de-interlacing and scaling to fill the display. The means and methods of that process are very dependent on the make/model of the display, so the look can vary greatly with the same content."

Glenn,
Thanks for your explanation. It gets into an area that I've often found difficult to get my head around and to collect all the facts surrounding it. I understand that PsF was developed in part to reduce the bandwidth of transmission of progressive material. By squirting out only half of the image at a time, the pipe transporting it can be smaller. If the whole frame squirts out at once, even though the total amount of data is the same in the end, that big squirt requires a fatter pipe.

Another thing that seems often overlooked is that truly interlaced transmission is displayed by interlaced devices (such as a ten-year-old Sony PVM monitor) as a constant stream of interlaced fields. These devices can't do any sort of 2:2 pulldown, frame buffering, or deinterlacing. Unless I'm mistaken, they simply display fields as they get them. The result is interlace "artifacts" in all cases. When the transmission is i59 all of the fields have a temporal difference. When the transmission is p29 (please forgive the shorthand) every third field will have a temporal difference from the one it is interlaced with. Hence, the artifacts, which are readily seen when I play p29 material through my Kona card to a CRT monitor.
- Tom




Rafael Amador
Re: Why EVER use interlaced?
on Jun 2, 2010 at 12:38:37 pm

[Tom Brooks] " I understand that PsF was developed in part to reduce the bandwidth of transmission of progressive material. By squirting out only half of the image at a time, the pipe transporting it can be smaller. If the whole frame squirts out at once, even though the total amount of data is the same in the end, that big squirt requires a fatter pipe. "
The bandwidth is exactly the same.
Split or not, you pass 1920x1080 pixels per second.
rafael



http://www.nagavideo.com



John Heagy
Re: Why EVER use interlaced?
on Jun 2, 2010 at 11:00:57 pm


[Rafael Amador] "The bandwidth is exactly the same.
Split or not, you pass 1920x1080 pixels per second. "


Correct...

Fields = little squirts
Frames = big squirts

Interlacing is a form of compression to avoid 60 full frames per second. Instead it's 30 interlaced frames per second... aka 60i.
It preserves temporal resolution at the expense of spatial. 60 lbs in a 30 lb bag.
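The "60 lbs in a 30 lb bag" point can be checked with simple arithmetic: 60 half-height fields per second carry the same pixel rate as 30 full frames, and half the rate of true 60p. A quick sketch (active picture only, ignoring blanking and chroma subsampling):

```python
# Active-picture pixel rates for 1080-line formats (nominal rates;
# broadcast actually runs at 59.94/29.97, which scales all three equally).
w, h = 1920, 1080

rate_60p = w * h * 60          # full frames, 60 per second
rate_60i = w * (h // 2) * 60   # half-height fields, 60 per second
rate_30p = w * h * 30

assert rate_60i == rate_30p       # same bandwidth as 30p...
assert rate_60i == rate_60p // 2  # ...half the bandwidth of true 60p
```

So 60i buys double the temporal sampling of 30p within the same pixel budget, which is exactly the trade-off described above.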

John Heagy






© 2017 CreativeCOW.net All Rights Reserved