Imagine your scene is a wide shot of a room with the floor visible. The floor has a rug with a stain on it. Plenty of tracking points come up for the floor and the rug.
The shot starts from a lock-off, then pans.
My idea is to take frame 1 and send it to Photoshop to repaint the rug.
I exported the new rug with an alpha channel, and it looks perfect.
I select the tracking points in the rug region and it gives me the target area, no problem. For the hell of it (this could be my problem), I set the ground plane and create a null and a camera.
When I turn on 3D for the rug-patch layer, it gets scaled down to a tiny size and ends up flipped and skewed oddly in 3D space. I can get it to roughly match by scaling the layer up 3000% and fiddling with its orientation, and I eventually get close.
Given this same type of scene patch, what workflow works better than the perfectly logical method I tried?
Basically, I don't want it to scale the patch (target) layer in Z space...
Track the camera.
Make a solid with the size and inclination of your rug.
Then project the PSD file onto the rug solid. Do this via a light placed at the camera's position on the first frame.
Since the light and the solid stay fixed while the camera makes its 3D move, the retouched frame should remain correct.
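The geometry behind this trick can be sketched in a few lines of Python (a minimal pinhole-camera model with hypothetical helper names, not After Effects code): a pixel of the repainted frame, cast back out through a "projector" sitting at the frame-1 camera position, lands on exactly the floor point the camera originally photographed. Because the patch is now glued to a fixed world-space plane, any later camera position sees it with correct parallax, which is why the projection stays locked once the camera moves.

```python
import numpy as np

def project(point, cam_pos, f):
    """Pinhole projection: camera at cam_pos, looking straight down -Z
    with focal length f. Returns (x, y) image-plane coordinates."""
    p = point - cam_pos
    return np.array([f * p[0] / -p[2], f * p[1] / -p[2]])

def unproject_to_plane(xy, cam_pos, f, plane_z=0.0):
    """Cast a ray from cam_pos through image point xy and intersect it
    with the horizontal plane z = plane_z (the 'rug' solid)."""
    d = np.array([xy[0], xy[1], -f])        # ray direction through the pixel
    t = (plane_z - cam_pos[2]) / d[2]       # solve cam_pos + t*d for the plane
    return cam_pos + t * d

# Frame 1: camera 5 units above the floor sees a point on the rug.
cam1 = np.array([0.0, 0.0, 5.0])
f = 2.0
rug_point = np.array([1.0, 2.0, 0.0])
pixel = project(rug_point, cam1, f)

# A 'light' at the same position projects that pixel back onto the floor:
# it lands on the identical world point, so the patch is pinned in place.
landed = unproject_to_plane(pixel, cam1, f)
print(np.allclose(landed, rug_point))  # True: round trip is exact
```

The same round trip fails if the projector and the frame-1 camera are in different places, which is why the answer stresses matching the light to the camera's first-frame position.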