Looking for a little sideways insight into how they might have produced the Intel Museum of Me online "Experience".
(This generic example seems to have been cut down to create transitions, but you get the idea.)
Basically you accept the Facebook add-on & it harvests selected images & text from your account to populate the video template. Methods for harvesting the images & text are readily available, but what I'm after is the workflow behind compositing that harvested content onto the tracked moving video.
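For the harvesting step, the shape of the data would presumably be something like the Facebook Graph API's photos edge. A minimal sketch of turning such a response into a render-job manifest, assuming field names (`source`, `name`) as they appeared in the Graph API of the time, with the payload, job folder name & helper entirely hypothetical:

```python
# Sketch: turn a hypothetical Graph API /me/photos response into a render-job
# manifest a template file could read later. Field names are assumptions,
# not a guaranteed contract.
import json
import os

def build_manifest(payload, job_dir):
    """Write a manifest.json listing each photo's URL and caption."""
    entries = []
    for item in payload.get("data", []):
        entries.append({
            "url": item.get("source"),        # full-size image URL
            "caption": item.get("name", ""),  # user-supplied caption, if any
        })
    os.makedirs(job_dir, exist_ok=True)
    with open(os.path.join(job_dir, "manifest.json"), "w") as f:
        json.dump(entries, f, indent=2)
    return entries

# Mocked response standing in for a real API call.
sample = {"data": [{"source": "http://example.com/p1.jpg", "name": "Holiday"},
                   {"source": "http://example.com/p2.jpg"}]}
entries = build_manifest(sample, "job_0001")
```

The actual downloading & renaming of the images would hang off the same manifest.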
Other customisable web video campaigns I've seen intersperse static shots, with your images placed/cropped to fit the desired locations, which is possible to do entirely within Flash. The Museum of Me, however, is basically one long continuous tracking shot.
The video looks CG, so I assume there's a perfect 3D camera move, with image placeholders in 3D space, that can be exported to something like After Effects or another 3D engine.
After this I'm thinking of two possible routes:
Video route - Since the quality of the composite is high, I suspect the new images are pre-rendered into a new video layer with an alpha mask during the "loading" sequence at the beginning. Could this be done with After Effects: a watch folder waits for a job to arrive (submitted by a script after the images, videos & text files are harvested from Facebook, renamed & dumped into a folder that the After Effects template file reads at render time)? Even if it is possible to script After Effects to read these external files, the pre-rendering requirement has implications.
E.g. does this mean it should really be a "client-side" rendering job? If it's "server-side", you could hit processing bottlenecks as thousands of users potentially trigger their experiences at once. Can After Effects be run as a client-side rendering app, or is this now sounding like the realm of a bespoke web rendering engine?
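The server-side bottleneck is really a queueing problem: cap concurrent renders at whatever the farm can sustain & let spikes wait in line. A minimal sketch with Python's standard `queue`/`threading`, where the worker count & render stub are placeholders:

```python
# Sketch: bound concurrent renders with a job queue so thousands of
# simultaneous triggers queue up instead of all rendering at once.
import queue
import threading

MAX_CONCURRENT_RENDERS = 4  # assumed render-farm capacity
jobs = queue.Queue()
done = []

def render(job_id):
    done.append(job_id)  # stand-in for an actual aerender invocation

def worker():
    while True:
        job_id = jobs.get()
        if job_id is None:  # sentinel: shut this worker down
            jobs.task_done()
            break
        render(job_id)
        jobs.task_done()

def run(job_ids):
    threads = [threading.Thread(target=worker)
               for _ in range(MAX_CONCURRENT_RENDERS)]
    for t in threads:
        t.start()
    for j in job_ids:
        jobs.put(j)
    for _ in threads:
        jobs.put(None)  # one sentinel per worker
    jobs.join()
    for t in threads:
        t.join()
```

Users would then get a "your video is being prepared" wait rather than an instant result, which may be why the experience opens with a "loading" sequence.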
Live 3D route - Export the camera move to something like Unity 3D, then export to Flash (or do it in Away3D) to use Stage3D. The images & text would be dynamically loaded & played back in Flash with the camera move, and the CG "background" video layered over the top with an alpha channel to let the dynamic content show through.
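The dynamic half of that route boils down to: placeholders sit at fixed 3D positions, and each frame you project them through the exported camera move so they line up under the alpha'd CG video. Sketched in Python for clarity (the real thing would be ActionScript/Stage3D), with an illustrative focal length & linear keyframe interpolation as simplifying assumptions:

```python
# Sketch of projecting fixed 3D photo placeholders through an exported
# camera move. Simple pinhole projection, translation-only camera; all
# numbers are illustrative.
FOCAL = 500.0  # assumed focal length in pixels

def project(point, cam):
    """Perspective-project a world-space (x, y, z) point relative to the camera."""
    dz = point[2] - cam[2]  # depth in front of the camera
    return ((point[0] - cam[0]) * FOCAL / dz,
            (point[1] - cam[1]) * FOCAL / dz)

def camera_at(keys, frame):
    """Linearly interpolate exported camera keyframes [(frame, (x, y, z)), ...]."""
    if frame <= keys[0][0]:
        return keys[0][1]
    for (f0, p0), (f1, p1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return tuple(a + (b - a) * t for a, b in zip(p0, p1))
    return keys[-1][1]
```

In practice the exported move would also carry rotation & field-of-view per frame, but the per-frame project-and-draw loop is the same idea.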