I have a question about how this audio-reactive Native Instruments Razor promo effect was achieved.
I have watched this tutorial, which comes close to the effect: http://www.creativedojo.net/razor-soundwaves/
It is good, but it does not achieve the same level of 'accuracy' as the original: the displacement occurs behind the sound's wavefront as well as at the front.
I asked Errorsmith, the product designer, and he replied: "we recorded the amplitude and frequency changes of all 320 sinus generators of a razor voice. Then this data recording got visualized in after effects."
I believe he used Trapcode Sound Keys or a Max/MSP script to capture the data used for the Razor plug-in visualizer, but the big question is: how can you use this amplitude and frequency data to drive INDIVIDUAL strings in Form?
It seems there are two basic requirements:
1) Map audio amplitude change to z-axis displacement amount (already achievable within the Audio React settings).
2) Map the frequency range of different sounds across the string particles along the y-axis.
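Not a Form-settings answer, but on the data side the two requirements above amount to a short-time FFT: one amplitude value per frequency band per video frame, which is presumably the kind of recording Errorsmith describes. A minimal sketch in Python with NumPy (the 320-band count, frame size, and hop are my illustrative assumptions, chosen so the hop lands near 60 fps at 44.1 kHz):

```python
import numpy as np

def band_amplitudes(signal, sr, n_bands=320, frame_size=2048, hop=735):
    """Short-time FFT magnitudes grouped into n_bands frequency bands.

    Returns an array of shape (n_frames, n_bands): one amplitude per
    band per animation frame. hop=735 gives ~60 fps at sr=44100.
    """
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    spec = np.empty((n_frames, frame_size // 2 + 1))
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_size] * window
        spec[i] = np.abs(np.fft.rfft(frame))
    # Group FFT bins into n_bands equal-width bands
    # (320 to mirror Razor's 320 partials)
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    return np.stack([spec[:, a:b].mean(axis=1)
                     for a, b in zip(edges[:-1], edges[1:])], axis=1)

# Example: a decaying 1 kHz sine, standing in for an isolated sound
sr = 44100
t = np.arange(sr) / sr
sig = np.exp(-4 * t) * np.sin(2 * np.pi * 1000 * t)
amps = band_amplitudes(sig, sr)
print(amps.shape)  # (58, 320)
print(amps[0].argmax() * (sr / 2) / 320)  # loudest band sits near 1000 Hz
```

Each column of the result could then drive one string's z-displacement (e.g. pasted in as keyframes via a script), which would give you requirement 2 exactly rather than approximately.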
I am not convinced this is possible by tweaking the Audio React and displacement settings alone, but I'd love to be proved wrong.
Even with an isolated sound source such as a repetitive hi-hat or kick drum, I have dialled in the relevant frequency and played around (a lot) with width and threshold, but I cannot get string movement whose decay accurately reflects the shape and decay of the sound.
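For what it's worth, the decay problem you describe is usually down to the envelope follower rather than the frequency band: if you precompute your own envelope you get explicit attack and release controls. A sketch of a one-pole attack/release follower, sampled at video frame rate (the time constants and fake-kick test signal are my own assumptions):

```python
import numpy as np

def envelope(signal, sr, fps=30, attack=0.005, release=0.080):
    """One-pole attack/release envelope follower at video frame rate.

    A fast attack tracks the transient; the release constant controls
    how quickly the value falls back, i.e. the decay shape of the
    resulting string movement.
    """
    a = np.exp(-1.0 / (attack * sr))
    r = np.exp(-1.0 / (release * sr))
    env = np.empty_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = a if x > level else r  # rise fast, fall slowly
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    # One value per video frame: these become your keyframes
    step = sr // fps
    return env[::step]

# Example: a fake kick drum, a 60 Hz sine with a sharp exponential decay
sr = 44100
t = np.arange(sr // 2) / sr
kick = np.exp(-12 * t) * np.sin(2 * np.pi * 60 * t)
frames = envelope(kick, sr)
print(len(frames))  # 15 keyframes for half a second at 30 fps
```

Matching the release constant to the natural decay of each EQ'd sound is then a one-number tweak per string, instead of fighting global width/threshold settings.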
I hope some of this makes sense; any suggestions would be fantastic. I am a music producer, so I wanted to prepare EQ'd sounds to drive individual strings for a music video animation.