As hard as I try, I have a difficult time adjusting levels in a movie to get the relative levels correct. I just had an idea on how to get close in an analytic way.
Audition has the ability to normalize an audio selection. During this process, it allows you to specify a normalization value like -10 dB. I have done some experiments and found:
1) Normalizing to 0 dB creates a signal whose max amplitude is given the largest digital number. For 8 bit sound that would be 2^8/2-1 or 127. All other samples are scaled by the same linear factor, so the relative levels within the clip are preserved.
2) Asking to normalize with an offset of -10 dB will create a digital representation whose resulting peak will be 10 dB below full scale.
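The arithmetic behind those two observations can be sketched in a few lines. This is my own illustration (in Python with NumPy), not Audition's actual code; the 8-bit full scale of 127 is the value from point 1:

```python
import numpy as np

def normalize(samples, target_db=0.0, full_scale=127):
    """Scale `samples` so the peak sits `target_db` below full scale."""
    peak = np.max(np.abs(samples))
    # dB is 20*log10 of an amplitude ratio, so the target peak amplitude is:
    target_amplitude = full_scale * 10 ** (target_db / 20.0)
    # One uniform linear gain is applied to every sample.
    return samples * (target_amplitude / peak)

sig = np.array([10.0, -64.0, 30.0])
print(np.max(np.abs(normalize(sig))))          # 127.0 (0 dB = full scale)
print(np.max(np.abs(normalize(sig, -10.0))))   # ~40.2 (10 dB below max)
```

Note that a -10 dB peak lands at roughly 40 out of 127, not at a simple fraction like 117/127, because dB is logarithmic in amplitude.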
Is it therefore possible to make a first pass at adjusting the audio levels of a movie sound track by:
a) Finding the loudest sound and normalizing it to 0.
b) Find the SPL dB value for this sound in a table, e.g. thunder is at +110 dB.
c) For every other sound, find its value in the SPL dB table, e.g. conversation is at 65 dB.
d) Take the difference between the max (110) and 65, which is 45, and normalize this sound down 45 dB.
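Steps (a) through (d) boil down to one subtraction per clip. A minimal sketch (the table entries are just the example figures above, not an authoritative SPL reference):

```python
# Sketch of steps (a)-(d): every clip is normalized to 0 dB first, then
# attenuated by its distance (in dB SPL) from the loudest sound in the mix.
# These table values are the example figures quoted in the post.
SPL_TABLE = {"thunder": 110, "truck": 75, "conversation": 65}

def offset_db(sound, loudest="thunder", table=SPL_TABLE):
    """dB of attenuation to apply after normalizing the clip to 0 dB."""
    return table[sound] - table[loudest]  # negative = that many dB down

print(offset_db("conversation"))  # -45, matching step (d)
print(offset_db("truck"))         # -35
```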
Granted, it is the rare audio system that can reproduce everything from thunder down to a whisper. But I should be able to take the loudest sound (say a truck driving by at 75 dB) and adjust all my other sounds relative to it?
FLL Freak Productions
The normalize feature is good but you may think about using the hard limit feature instead ... Hard limiting will help with dynamics, limit your peaks, and bring out the stuff that gets lost below -15 or -20 dB ... I personally like to hard limit things to -3 dB for video ... The industry standard used to be -6 dB for audio, but now all audio seems to be hard limited as hot as they can get it, which is right at -0.1 dB ... Hope this helps .......... WILLIE
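For readers unfamiliar with the term: a hard ceiling keeps samples from exceeding a chosen level, so everything below the ceiling can be brought up hotter. The sketch below shows only the clamping idea; Audition's Hard Limiter is more sophisticated (input boost, look-ahead), and the float convention of 1.0 = 0 dBFS is my assumption:

```python
import numpy as np

def hard_limit(samples, ceiling_db=-3.0):
    """Clamp floating-point samples (1.0 = 0 dBFS) to a dB ceiling."""
    ceiling = 10 ** (ceiling_db / 20.0)   # -3 dB is about 0.708 of full scale
    return np.clip(samples, -ceiling, ceiling)

loud = np.array([0.2, 0.9, -1.0, 0.5])
print(hard_limit(loud))   # peaks above ~0.708 are clamped to the ceiling
```

Flattening peaks this way is exactly why a later reply worries about distortion: the clamped samples are no longer a faithful copy of the waveform.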
I will look into this soon.
Speaking off the top of my head, I would think that a pure limiter would introduce a lot of distortion.
Looking back at my first post, I think I was not very clear. When I record the audio track for a movie, it is done in sections. I record each sound effect by itself, the dialog by itself, and the music by itself. During each recording, the physical setup, the microphone, and the gains may all be different. I adjust the gains to get the best dynamic range out of each recording. But this means the whisper peaks close to 0 dB just like the thunderbolt. Normally I compensate by playing with the level (volume) control inside Premiere. Trying to get this right by ear is what is hard.
Trying to normalize each section based on SPL tables seems like a way to at least get "in the ballpark". So now my thunder clap stays at 0 dB and the whisper I normalize down by 60+ dB. Of course the dynamic range of the whisper is much less, but it has to have a lower volume than the clap.
I hope I made myself clearer this time! And thanks for the reply!
FLL Freak Productions
First, I can see (from what you have said) that you are working way too hard; you would be better off doing your audio editing in Audition ... P Pro is a great program for video but Adobe purchased Audition (formerly Cool Edit Pro) for a reason ... What do you record your audio on? If you would like, I would be more than happy to see if I can help you out with a problem clip; once I hear what you are trying to accomplish I may better understand and get you headed in a direction that will cut your time and give you just awesome sounding audio ... If you are so inclined, e-mail me a minute or so of raw audio in MP3 format, to firstname.lastname@example.org ............. WILLIE
Working too hard is the story of my life...!
You can see two examples of my work at http://www.fll-freak.com
(Scroll down a bit till you see the Dr. Justin Case Movie information)
These movies are all done with stop motion animation (claymation) and LEGO bricks. The movies explain the rules of an international competition to the audiences.
Till recently, I had been using Audacity. Now I have the whole Adobe Video Collection and will be using Audition. This year I have also graduated to a good condenser mic and an Edirol UA-25 USB digitizer. The UA-25 is a great product for its price.
The conversation track is recorded in one session with the actors. The bad sections are cut out and the whole track normalized to 0 dB. This is then split up into individual sentences to be added to the video in Premiere. Sound effects are recorded one at a time and individually normalized to 0 dB. The music track is also normalized to 0 dB and pasted in as an audio track in Premiere.
Now comes the hard part that my partner has been doing. He needs to set the volume level for the conversation at something less than 100% to leave headroom for sounds that are louder than the conversation. He also needs to set the volume level lower for quiet sounds. He has done a great job, but it has all been done by ear.
In the "Nina in No Limits" movie, how loud should the background city sound be in relation to her footfalls, the cars driving by, and the door opening? I understand that this is very subjective, and that the audio track is one big place where a movie director can make or break a film. But I am (was?) looking for a way to get into the ballpark and from there tweak for effect. Perhaps our current "adjust by ear" method is the best way.
This all came about because I have been present at tournaments where they played my movie and the master volume was not set right. I was going to create a short animation for the audio engineer to use in advance to set the master volume. A minifig would walk out and use his normal voice, his library voice, then fire off a gun. But the relative levels of these three sounds need to be the same as in the movie. So the idea came up to adjust levels analytically rather than by ear.
Thanks for all the help you give to us poor fools.
FLL Freak Productions
And I thought I was dazed and touched, I give, you win!!! The first problem I see is that you normalize everything to 0 dB ... In the digital audio world this is one of the supreme no-nos ... I would normalize to -3 dB or less ... When recording I stay between -6 and -3 ... Now here is where you can excel with Audition ... After you get the video portion done (which must take three quarters of a lifetime), I would import it into Audition in AVI format; you will be able to view the video and record your audio and add sound effects in real time right in Audition, which I think could save you a ton of time ... Now work with the audio as if it were a multitrack session ... This way you can have total control (better than Outer Limits) ... You will be able to adjust volumes, do pans, and totally open the stereo field to really add so much more to your production ... EQ is a very important factor as well ... After you get it like you want it, right click on the AVI file and you can remix your finished audio and the AVI file together ... You will have to save it as a new file ... I wish I was closer to you; it would only take a couple of hours to get you up to speed ... I feel that you can improve your production and cut a bunch of time ... Let me know if I can help in any way ... WILLIE
I will give your ideas a go over the next few weeks. I am working on that short animation for master volume setup. I can try out your workflow to see if it improves my sorry life. Again thanks for the help.
FLL Freak Productions
Let me know if you run into any problems ... Glad I could help ... WILLIE
I had some time to think these posts over. Your post on a Premiere - Audition work flow was great but has a few holes/questions.
In the world of stop motion animation, the dialog audio track is normally recorded before the first picture is taken. We use the dialog track to plan each and every picture. Analyzing the dialog allows us to minimize the number of frames that have to be taken.
Sometimes it is done the other way, as a voice-over. This requires very talented actors who can adjust their timing to within a frame or two.
So your workflow might be changed as follows.
1) Record the dialog with the actors and make sure the original recording is not clipped. The UA-25 has a dynamic limiter that can save one's posterior.
2) Cut the crud out of the track and normalize to -3dB.
3) Make whatever EQ adjustments are needed for the whole track: boost bass, remove hum, ...
4) Split into sentences.
5) As the animations become available, use Audition to glue the audio and video together.
6) Make individual EQ adjustments if needed for that scene.
7) Export the clips to be used in Premiere.
8) Use Audition to clean up and split up the sound effects and music.
9) Paste the sound effects and music into the Premiere project on separate audio tracks.
Now the question becomes, should the volume levels be adjusted in Premiere or back in Audition?
It would seem that the music track that spans multiple scenes would be best left to Premiere. But should the individual scenes be adjusted by going back to Audition, or can we just stay in Premiere?
FLL Freak Productions
OK, you are on the right track ... One note: if you are getting hum in your system there is something wrong; your audio should be clean, so check volumes and make sure you are recording between -6 and -3 dB ... Also, when recording the actors, try to get them away from any outside noise and always use a windscreen in front of the mic ... I am sure you know this stuff, I just like to hear my fingers hit the keyboard ... I would use Audition to do all audio with the exception of fades ... If you put the audio track together properly in Audition it will eliminate the need for multiple audio tracks in Premiere ... The bottom line is, do as much as you feel comfortable doing in Audition ... Read about hard limiting in the help file, and remember that EQ will make or break you ... I like to use the parametric EQ in Audition, it's just awesome ... I just looked your unit up and it looks like it will do for your uses; the limiting feature is very important when it comes to voice recording because of dynamic range ... If I can help in any way let me know ...