So I am working on music visualizations in After Effects that respond to audio input. I made a graphic in AE that I will then use in FCP, layered over background footage, all of which follows the beat of the music. Using a Waveform track as the audio in the AE composition, I rendered the visualization out to an Apple ProRes 4444 .mov file with the audio set to 44.1 kHz, 16-bit. When I imported the rendered .mov into FCP and layered it above the edited background footage (which I cut to the beat of a separate instance of the same original Waveform file), the original audio Waveform and the AE-rendered audio do not match up. They start at the same exact point, but over the course of the song they slowly drift apart by about half a second, leaving the visualization jarringly off by the end. I am wondering what is causing this.
Both instances of the audio are 44.1 kHz, and the frame rate of both the FCP project and the rendered AE visualization is 23.976 fps. However, the original waveform file is 24-bit, whereas the AE-rendered audio is 16-bit.
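One thing I wondered about: whether a 1000/1001 speed mismatch somewhere in the chain (i.e. one app treating the material as true 24 fps while the other applies the 23.976 slowdown) could explain the drift. A rough back-of-the-envelope check, assuming a roughly 8-minute song (the length is just a guess on my part):

```python
# Sanity check: how much drift would a 1000/1001 (23.976 vs 24 fps)
# speed mismatch accumulate over one song?
song_length_s = 8 * 60  # assumed ~8-minute song (hypothetical)

# 23.976 fps is really 24000/1001 fps. If one app plays the audio at
# true speed while the other applies the 1000/1001 slowdown, the two
# copies differ in playback speed by about 0.1%.
speed_ratio = 1000 / 1001
drift_s = song_length_s * (1 - speed_ratio)

print(f"Expected drift over {song_length_s} s: {drift_s:.3f} s")
```

A ~0.1% rate difference works out to roughly half a second of drift over eight minutes, which is in the same ballpark as what I'm seeing, so maybe that's a lead.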
Because the songs/visuals will be uploaded to YouTube, I don't want to compress the audio unnecessarily, but would using a different audio file format prevent the drift from occurring?
For now I am re-editing the background footage to match the AE-rendered audio, but going forward I would like to know how to prevent this audio drift, or at least understand why it is happening.
Thanks in advance! And let me know if there are any more specs you need that might help answer my questions.