3D Tracking help with slow motion?
I am shooting a time-warping, slow-motion, "Matrix bullet time" (or whatever you want to call it) sequence using twenty stationary cameras positioned in a semicircle around a subject. The frames from each of these cameras will be combined to make a smooth rotating shot around the subject at a single frozen moment in time. To make the motion as smooth as possible, I plan on interpolating frames with Twixtor. In addition, I was wondering if 3D motion tracking could help Twixtor interpret frames more accurately. Could I possibly track the 3D environment in which I am filming using a program such as SynthEyes and then use Twixtor to intelligently slow down the tracked subject? I may be completely off on this, but I thought I'd give it a try. I'm just trying to get the best-quality slow motion possible from my footage, without using layers. Any suggestions would be appreciated.
Thanks a lot
So to recap: you have 20 views, which means roughly 19 camera-to-camera gaps and 180 degrees to travel. 180/19 = about 9.5 degrees of rotation per key-image.
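That arithmetic generalizes to any rig; a quick sketch (the camera count and arc are the numbers from the post, adjust for your own setup):

```python
# Degrees of rotation each interpolated gap must cover.
def degrees_per_gap(num_cameras: int, arc_degrees: float) -> float:
    # 20 cameras on an arc give 19 camera-to-camera gaps to in-between.
    return arc_degrees / (num_cameras - 1)

print(round(degrees_per_gap(20, 180.0), 1))  # about 9.5 degrees per key-image
print(round(degrees_per_gap(20, 90.0), 1))   # about 4.7 on a 90-degree arc
```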
In the past, what I have seen with this kind of scenario ranges from "it works surprisingly well" to "it's really hard."
Things to be careful with:
- objects too close to camera
- too many layers of action
(Ideally, shooting the foreground over green screen and capturing in two passes removes a lot of these problems.)
Technically, the problem is that a large region of one image (more than a few pixels) is not visible in the other image when you in-between. This is what is called an occlusion.
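One common way to detect occlusions is a forward-backward flow consistency check: follow the forward flow from frame A to frame B, then the backward flow from B; if you do not land back near where you started, that pixel is likely occluded. A minimal NumPy sketch, with tiny synthetic flow fields standing in for what an optical-flow tool like Twixtor estimates internally:

```python
import numpy as np

H, W = 8, 8
fwd = np.full((H, W), 2.0)   # horizontal flow, frame A -> frame B (pixels)
bwd = np.full((H, W), -2.0)  # horizontal flow, frame B -> frame A

# Simulate an occluded region: the backward flow disagrees in a patch.
bwd[2:4, 2:4] = 0.0

def occlusion_mask(fwd, bwd, thresh=0.5):
    H, W = fwd.shape
    xs = np.arange(W)[None, :].repeat(H, axis=0)
    ys = np.arange(H)[:, None].repeat(W, axis=1)
    # Where each pixel lands in frame B (clamped to image bounds).
    landing = np.clip(np.round(xs + fwd).astype(int), 0, W - 1)
    # Round trip: x + fwd(x) + bwd(x + fwd(x)) should return near x.
    roundtrip = fwd + bwd[ys, landing]
    return np.abs(roundtrip) > thresh

mask = occlusion_mask(fwd, bwd)
print(int(mask.sum()))  # pixels flagged as likely occluded
```

Pixels whose forward flow lands in the disagreeing patch fail the round trip and get flagged; in production the threshold would be tuned per shot.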
For example, last year I helped someone in Europe do something similar to what you describe, and two simple lines animated on the background made it look pretty good. (It was car racers popping champagne after the race, frozen-time style, and the jumping, happy racer was creating a lot of warpiness at certain angles.)
But then someone else had a kitchen scene with corridors to other rooms, multiple actors... and that created a lot of roto work.
So, if you have shot some tests but not the final yet, I can certainly take a look - sometimes a simple thing like a prop in the wrong place turns a two-day job into a two-week job... I can be reached at techsupport at revisionfx dot com.
As for 3D camera reconstruction: one thing that would help is making sure the source stills are stabilized so the rotation by itself is smooth, as shake looks weird when slowed down.
From there, what would help is to know the set of software packages you have access to. Without going into details: for example, in the Softimage version there is a way to capture all the tracking points from the 3D reconstruction, yet these points are not necessarily more accurate than Twixtor's. In Fusion there is a way to link points to such 3D points as well, but it's a bit more manual and could possibly be augmented by scripting. In many other applications, our only way to absorb pre-tracking inputs is via 12-point parameters and mattes for layers... In some cases this becomes a form of assisted morph job, and the RE:Flex workflow (which includes our morphing tool) might be more appropriate than Twixtor.
I imagine these are digital SLR cameras. Another small concern is color stability across frames. It used to be much harder in photographic-film days, when precautions such as using all the same stock and the same film-processing bath were advised. Exposure is also an issue. In the past I helped someone with a long-exposure still-capture rig, timewarping with long trails; that's a completely different problem from wanting the shortest exposure possible (the smallest amount of motion blur, and as little depth of field as possible to start with, especially if you have specular highlight reflections). Specular highlight reflections can be undoable (well, a paint job), and combined with a DOF effect they also have diffraction properties you probably want to avoid.
Finally, 180 degrees is ambitious; a 90-degree rig with 20 cameras tends to be easier to interpolate without turning this into a morph job. I am not sure about the calibration and capacity of your cameras, or what you are trying to capture, but one trick, if there is an actor involved, is to have the first or last camera shoot continuously instead, so you can have, say, an actor jump with a related rotation at the start. However, if your first or last camera can only shoot continuously at 6 FPS, then your effect would need to be designed so that, for example, the action starts really fast and speed-ramps toward really slow, ending in a freeze...
Thank you so much for the thorough response, I really appreciate it. Here are some answers to the things you brought up.
I will not be shooting with still cameras, seeing as I have no means of triggering the shutters simultaneously. Instead I am using HDV cameras, fixed stationary to a rig and set to progressive-scan mode. I will then extract each frame from each of the 20 cameras to make the sequence.
That's good advice to arrange the 20 cameras around 90 degrees rather than 180. I want to get a significant arc of rotation, but quality is definitely more important.
I'm not using a green screen - I'm planning on using just a dark black background and lighting the subject very well. However, if you think that 3D tracking could be useful, I could illuminate the background so that I could successfully generate a 3D map.
There will be no props around the subject or in the background. It should be just a solid, plain, black background with the single subject in the middle. There will only be one subject performing at a time, so they will be the only layer in front of the black background. At times they will be performing vigorous actions such as jumping, twirling, dancing, etc., so I figured that this would be too difficult to try to use mattes and layers with. Do you think these active movements will cause Twixtor that much trouble? Also, you mentioned the possibility of using RE:Flex; I hadn't heard about it before, but now that I understand it, it almost seems more suitable for what I am trying to do. I just want to make a slow, smooth transition between these 20 cameras. Which method do you think I should use? Either way, is 3D tracking something that could help?
Unfortunately, I have only about three days to shoot this entirely, which doesn't give me much time for testing or trial and error. I'm trying to figure this out as thoroughly as possible before I go to shoot. I could send you some frames from the shoot (which will be in early April) in case you have any suggestions on how I should go about compiling my results in post-production, though.
Thanks again for all your help!
Good thing that you are over black and there are not multiple layers of stuff.
The faster the action, the better it is to have more FPS (e.g. 30 FPS is better than 24...). In the end it has a lot to do with how many pixels a point in one picture travels to reach its destination in the other picture.
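As a rough feel for that pixel travel: a feature at radius r from the rotation axis shifts laterally by about r*sin(step) in world space between adjacent views, which a pinhole projection turns into pixels. A sketch with entirely illustrative numbers (subject radius, camera distance, and focal length in pixels are assumptions, not from the post):

```python
import math

def pixel_travel(step_deg, feature_radius_m, camera_dist_m, focal_px):
    # Lateral world-space shift of a feature between adjacent views,
    # projected to pixels with a simple pinhole model.
    lateral_m = feature_radius_m * math.sin(math.radians(step_deg))
    return lateral_m / camera_dist_m * focal_px

step = 90.0 / 19            # 20 cameras on a 90-degree arc
per_view = pixel_travel(step, 0.4, 3.0, 1500.0)
per_frame = per_view / 10   # if in-betweening 10 frames per gap
print(round(per_view, 1), round(per_frame, 1))  # roughly 16-17 px per view
```

The fewer pixels each in-between has to bridge, the easier the interpolation, which is why tighter camera spacing and higher FPS both help.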
The workflow with RE:Flex is just different; it's more a different UI than a completely different technology. With RE:Flex Morph you have to first place the frames at the right place in time. The auto-align button in RE:Flex is the optical-flow switch (it's like Twixtor's motion vectors). Each has its own advantages.
The same applies to a person jumping... The part that is usually harder for Twixtor is, for example, when the person's hands are extended forward toward the camera, as that creates more pixels that are invisible in the other frames. I know it might sound strange, but someone with their arms up toward the ceiling (or close to the body while twirling) is easier than arms extended forward toward the camera(s)...
Since these are video cameras, you have the option of keeping many takes, just running Twixtor, and seeing which ones work best.
For your (or someone else's) info:
1) http://www.breezesys.com/MultiCamera/index.htm has a multi-DSLR remote control. I bought the single-camera one for my Canon and it's OK, but I never tried the multi-camera one.
2) Are the cameras going to be in sync? If not, it's best to shoot the max FPS you can (e.g. a 1/30 time offset is better than 1/24).
It might also be possible to time-align them in Twixtor, but that's something you would probably like to avoid.
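The reasoning behind "higher FPS is better when unsynced": free-running cameras can be offset by up to one frame period, and a moving subject drifts during that offset. A back-of-the-envelope sketch (the 2 m/s hand speed is an illustrative assumption):

```python
# Worst-case spatial error from unsynced, free-running cameras.
def worst_case_offset_s(fps: float) -> float:
    # Two free-running cameras can be out of phase by up to one frame.
    return 1.0 / fps

def worst_case_drift_m(fps: float, subject_speed_mps: float) -> float:
    return worst_case_offset_s(fps) * subject_speed_mps

# A hand moving at 2 m/s (assumed) during a jump:
for fps in (24.0, 30.0):
    print(fps, round(worst_case_drift_m(fps, 2.0) * 100, 1), "cm")
```

At 30 FPS the worst-case drift is about 6.7 cm versus 8.3 cm at 24 FPS, which is why shooting the highest frame rate available shrinks the sync error even when you cannot trigger the cameras together.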
I am not sure about a 3D camera track, other than if you need to add CG around it. However, it's possible that even if your rig is fixed, playback will feel like a shaking camera, so that at least should be stabilized first if need be. It's probably something you could test if there is a quick way to dump a take from each camera to a computer (maybe place a cube somewhere in the background and see if it wiggles when you play the 20 frames).
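The cube-wiggle test can be quantified: locate the marker in each camera's frame and measure how much its position drifts across the rig. A toy NumPy sketch with synthetic frames (real footage would use a proper template matcher rather than this bright-spot detector):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frame(cx, cy, size=64):
    frame = rng.normal(0.0, 0.05, (size, size))  # dark noisy background
    frame[cy - 2:cy + 3, cx - 2:cx + 3] = 1.0    # 5x5 bright 'cube'
    return frame

def marker_position(frame):
    # Centroid of the brightest pixels approximates the cube center.
    ys, xs = np.where(frame > 0.5)
    return xs.mean(), ys.mean()

# Simulate 20 cameras whose cube should sit at (32, 32) but wiggles.
rig = [(32, 32), (33, 32), (32, 31), (34, 33)] + [(32, 32)] * 16
positions = [marker_position(make_frame(cx, cy)) for cx, cy in rig]
xs = [p[0] for p in positions]
max_wiggle_px = max(xs) - min(xs)
print(max_wiggle_px)  # 2.0 pixels of horizontal wiggle in this toy rig
```

If the measured wiggle is more than a pixel or two, stabilizing the stills to the marker position before interpolating is probably worth it.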
Also, if you have sequences for each view, that might come in handy as well (it gives you the latitude to have some motion, switch to live action in between, and continue, rather than staying in totally frozen time). For example, a nice cheat could be that part of the take works fine until some point, and the rest works well from half a second later, so you use one view slowed to the right speed with Twixtor to bridge the switch.
Thanks so much for all your help and suggestions. This is really good information. I think I will end up using a combination of both.
One last question:
Do you have any recommendation for a general angle / distance between cameras for Twixtor or RE:Flex to successfully slow down the sequence (maybe to 20%)?
There are too many variables to provide exact numbers.
I would suggest 5-degree increments for cameras on an arc (pointing at the same location), shooting someone who is completely in view in terms of framing.
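Turning the 5-degree suggestion into rig numbers (the playback FPS and shot duration are illustrative assumptions, not a recommendation from the thread):

```python
def cameras_needed(arc_deg: float, step_deg: float) -> int:
    # One camera per fence post: number of gaps plus one.
    return int(arc_deg / step_deg) + 1

def frames_per_gap(playback_fps: float, seconds_for_arc: float,
                   num_cameras: int) -> float:
    # Output frames each camera-to-camera gap must supply.
    total_frames = playback_fps * seconds_for_arc
    return total_frames / (num_cameras - 1)

print(cameras_needed(90.0, 5.0))      # 19 cameras for a 90-degree arc
print(frames_per_gap(24.0, 4.0, 19))  # in-betweens to synthesize per gap
```

The second number is the interpolation load: the more output frames each gap must supply, the more the result depends on clean optical flow, so slower arcs or more cameras ease the job.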