LHe analog audio level
Moving to a new LHe system, Final Cut Pro 6.0, AJA drivers 4.0, OSX 10.4.9 clean install
I operate in a primarily analog environment, a BVW-70 BetaSP deck which expects +4dBm = 1.228 Vrms = 0VU as reference tone in and out. I deal only in broadcast programming, with a standard of -6dB to -12dB of headroom for peaks, so the FCP default of -12dB for reference tone (which cannot be permanently modified) has served me well.
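For anyone checking the math: assuming the usual 0 dBm reference of 1 mW into a 600 ohm load (about 0.775 Vrms), the +4dBm = 1.228 Vrms figure works out as follows (just a sanity-check sketch, not anything from AJA or Apple):

```python
import math

# 0 dBm is 1 mW; into a 600-ohm load that is sqrt(0.001 W * 600 ohm) ~= 0.7746 Vrms
V_REF = math.sqrt(0.001 * 600)

def dbm_to_vrms(dbm):
    """Convert a dBm level (600-ohm reference) to RMS voltage."""
    return V_REF * 10 ** (dbm / 20)

print(round(dbm_to_vrms(4), 3))  # +4 dBm -> ~1.228 Vrms
```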
To get +4dBm out of the LHe with FCP tone at -12dB, I found that I can adequately control output by deselecting "Lock Input Audio Gain" in the AJA Control Panel and then dropping the Kona's output volume from the default 100% to 40% in System Preferences > Sound. The Master Gain adjustment in Log & Capture then offers a range of -96dB to -20dB for incoming audio (+4dBm at the XLR).
So far AJA support will only state that -20dB (in FCP) In and Out corresponds to +4dBm. I did confirm this with testing. I had hoped AJA would allow for a wider range of analog audio input levels. If I capture a clip with the default (and maximum) of -20dB referenced to +4dBm tone, the result is a relatively small amplitude waveform with a huge amount of headroom.
I'd really like to be able to raise the incoming level (+4dBm, the professional analog industry standard) to equal FCP's default of -12 dBFS without external amplification. I am surprised no one else has asked for this capability.
Hi Dave -
The broadcast standard (and pro audio standard for analog audio) is +4dBu (we can say +4dBm if you like, to reference it to 600 ohms). Products from Sony and others calibrate this to -20dB Full Scale on their VTRs. As your tests have shown, all of this is accurate from the AJA product.

I have NO F#$%ING idea what on earth Apple is doing with -12dB, or what their meters correspond to; to me, they mean ABSOLUTELY NOTHING. All those meters indicate is that you are getting some sort of level in and out of the FCP application. You can use the meters on your analog mixer (Mackie shows +4 at 0VU, and Behringer varies from model to model, with 0VU or +4 in the yellow LEDs). These crappy mixers have more accurate metering than FCP.

If you want to get fancy schmancy and don't want to rely on your VTR meters, you can get a nice set of Dorrough loudness meters (there are others, like Coleman) that will show you a REAL, ACCURATE LEVEL of what you are getting, and this will correspond to the accurate levels coming out of the AJA and your Sony VTR. There are standalone software applications from companies like Hamlet that will show audio levels too. But PLEASE, PLEASE don't start bitching about not being able to calibrate to FCP's audio meters.
Your issues remind me of the early days of AVID and the pro audio guys, when all of these standards started to come out and everyone wanted zero to just be zero. When the audio guys pushed the headroom on those systems, AVID's -14dB (these days -20), Sony's -20, and Panasonic's -18 (like the original SV3700 DAT machines) would drive everyone crazy ("why am I getting clipping?!").
The AJA product itself has PLENTY of headroom. I have no idea who the idiot was that picked -12 for FCP.
Thanks for your response.
I have no intention of using the FCP meters to set volume levels.
I agree their best use is just to indicate the presence of audio. My main comment was:
If I capture a clip with the default (and maximum) of -20dB referenced to +4dBm tone, the result is a relatively small amplitude waveform with a huge amount of headroom.
I'd really like to be able to raise the incoming level to equal FCP's default -12 dBFS without external amplification.
Also, I refer to +4dBm as the current standard, even though the industry no longer requires a 600 ohm termination on inputs.
So, why couldn't AJA rewrite the drivers to allow a wider range of input levels (even as the analog world fades away)?
Sony defined the -20dBFS standard, not AJA or Apple, and Sony is a very powerful company. -20dBFS gives you tone at +4dBu (voltage units, not milliwatts, as would relate to a 600 ohm transformer's impedance). 20dB gives you PLENTY of headroom before clipping, 8dB more than -12.

Remember, in the world of digital there is no hiss, so you don't need to "peak" the levels to get enough signal on tape for a good S/N ratio. Modern A/D and D/A converters do not need to be "peaked" to use as many bits as possible for good resolution of the audio signal. This is 2007, not 1992; you won't hear audio sampling noise if your levels are low. The bottom line is that when you average around -20dBFS, the tape, when dubbed, plays back at 0VU on standard Sony Beta VTRs, which are still the #1 standard in the US for broadcast playback.
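For what it's worth, the headroom arithmetic here can be laid out in a quick sketch (the alignment levels are just the ones mentioned in this thread):

```python
# Headroom before digital clipping (0 dBFS) for common alignment levels.
# A reference tone recorded at L dBFS leaves (0 - L) dB of headroom.
references = {
    "Sony": -20,         # -20 dBFS = 0 VU = +4 dBu on Sony Beta VTRs
    "Panasonic": -18,    # e.g. the original SV3700 DAT machines
    "early AVID": -14,   # AVID later moved to -20
    "FCP default": -12,
}

for name, level in references.items():
    print(f"{name:12s} tone at {level:4d} dBFS -> {0 - level} dB of headroom")
```

The -20dBFS alignment leaves 20 dB before clipping, 8 dB more than FCP's -12 default.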
You are correct, and I most humbly apologize for my error. Yes, dBm does refer to milliwatts into a 600 ohm impedance. I should have specified +4dBu as equaling 1.228 Vrms.
I'll stick with direct communication with AJA concerning why they only permit attenuation and won't add the ability to boost analog inputs over unity when "Lock Input Audio Gain" is deselected.
Thanks for your time.