Hello Creative Cow community,
I have searched the threads and haven't found a comfortable answer to my question, so I could use a little help.
We are just about to go into post on a feature doc and I think we have worked out the workflow with Media Composer (Mac) and PluralEyes (ver. 3.1.1), but I wanted to reach out and confirm this is the most efficient workflow. The footage is a mix of Canon 5D and 7D.
Here is the breakdown of the project:
We have about 20 interviews with 2 camera angles and dual-record audio, so we are sitting on about 50 hrs of interview footage plus another 10-15 hrs of B-roll. The B-roll will not be put through PluralEyes.
The workflow we have worked out seems to be dictated by PluralEyes at this point, and is as follows:
1. Transcode the media for cameras 1 and 2, lay it into a timeline [V1 & V2, A1-A4, with the dual-record audio on A5 & A6], and send it off to sync in PluralEyes.
2. Drag and drop the synced AAF file into a bin (which creates the sequence).
3. Working in the new "synced sequence", make selects, copy each select to the clipboard, and drop the clipboard contents into a new "selects" bin. The result is having multiple selects as sequences rather than actual subclips.
4. Copy and paste from the synced-selects sequences into the master edit sequence.
Is this the best (or only) workflow when using PluralEyes, or are we missing something?
Did you use a slate for this? Frankly, if there is a slate it might be faster to sync manually than to go through the whole process of using PluralEyes with Media Composer. If not, then I'd suggest the following adjustments to your workflow:
Once the AAF comes back from PluralEyes, go through the sequence, mark an IN and OUT for each "take" in the sequence, and drop it into a bin. This creates a new sequence that has only one event. Rename it to whatever that source is (scene/take, etc.). Once done, highlight all the single-event sequences and run AutoSync. This creates subclips that are much easier to use as sources when editing, and it also allows sync breaks and the like to be used.
My thoughts too. I find that the good old slate and manual sync is frankly faster, more reliable, and a breeze in MC.
The fewer peripheral apps involved, the better. Also, it's not impossible that you'd have to fine-tune again after the PluralEyes stage, which makes the whole process somewhat annoying and "unstable".
There is nothing simpler or more reliable than a clapperboard.
Not that PluralEyes doesn't have a place; it can be invaluable. But for me, and especially with Media Composer, it is the "Hail Mary" pass of the choices I would make: when nothing else was provided, it can be a lifesaver.
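For anyone curious what a tool like PluralEyes is doing under the hood, waveform sync boils down to cross-correlating the camera's scratch audio against the recorder's track and taking the lag with the strongest match. A minimal NumPy sketch of that general idea (the signals and names here are synthetic stand-ins, not anything from the actual product):

```python
import numpy as np

def find_offset(reference, other):
    """Sample offset of `other` relative to `reference`, taken as the
    argmax of the full cross-correlation of the two signals."""
    corr = np.correlate(other, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# Synthetic stand-ins for the two recordings:
rng = np.random.default_rng(0)
room_tone = rng.standard_normal(8000)           # shared ambient sound
camera_audio = room_tone                        # camera scratch track
recorder_audio = np.concatenate([np.zeros(240), room_tone])  # starts 240 samples late

print(find_offset(camera_audio, recorder_audio))  # -> 240 (5 ms at 48 kHz)
```

Real tools are far more robust (they match spectral features, handle drift and noise, etc.), but this is why a clean scratch track on the camera matters so much: weak or noisy camera audio gives the correlation nothing to lock onto.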
But in this scenario, planned interview-type shooting rather than run-and-gun, the production could easily have planned:
1. Lavs and a boom via a mixer directly into the Canon 5D, for great, controlled sound with no syncing needed.
2. A clapperboard for each interview, not only for easy sync identification but also for logging info, etc. It is the cheapest route to high-quality audio and guaranteed sync.
The issue seems to be that productions see this as the first means of syncing, which it shouldn't be. I have a meeting with Red Giant planned at NAB to go over the issues I have found, and I hope they will consider fixes that would make the whole Media Composer experience much easier.
With a little more planning (and budget) you could always use a timecode audio recorder synced to a Horita LTC generator whose output gets recorded to the Canon 5D; then a simple "Read Audio Timecode" with a batch AutoSync could also have been considered.
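The appeal of the jam-synced timecode route is that lining clips up becomes pure arithmetic: every device stamps the same running timecode, so the offset between two clips is just the difference of their start timecodes. A quick sketch of that math in Python (non-drop 24 fps assumed; the timecode values are made up for illustration):

```python
def tc_to_frames(tc, fps=24):
    """Convert an 'HH:MM:SS:FF' non-drop timecode string to an
    absolute frame count from midnight."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

# Hypothetical start timecodes stamped by jam-synced devices:
camera_start = tc_to_frames("14:23:10:12")
recorder_start = tc_to_frames("14:23:08:00")

offset_frames = camera_start - recorder_start
print(offset_frames)  # -> 60: camera starts 2 sec 12 frames after the recorder
```

Note this sketch is for non-drop frame rates only; drop-frame timecode (29.97 fps) skips frame numbers at most minute boundaries and needs extra handling, which is one more reason to let the NLE's "Read Audio Timecode" / AutoSync do the work in practice.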