I've normally seen systems set up with two monitors: one on the desk for the editor, and a TV-type display for the client. Each has its own speakers pointed at the intended viewer, and on the desk is a mixer/switcher so the editor can switch the audio from one set of speakers to the other.

I used to take the audio from the DB25 breakout, but now that we have all-digital screens with more processing going on, the client and desk monitors are slightly out of sync picture-wise, and therefore the audio is as well. So I've been looking at routing the audio to the switcher from the audio outs of each display, figuring that's more likely to stay in sync with its own picture than relying on one source to feed both displays (or adding an audio delay somewhere along the way).

Am I not looking at this correctly? I realize some software will let you delay the audio to keep it in sync, but then you have the issue of making sure that happens every time you switch focus. Routing the audio in the wiring goes a long way toward making sure the editor doesn't have to fuss: pick the speakers and source for each display, and it should be (just about) in sync.
Am I thinking this through correctly?
The ideal would be if the AJA box let me delay the signals per output, so HDMI could run at zero delay and the broadcast output at a 33 ms delay (or whatever). But that would probably be a huge headache.
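For what it's worth, the "33 ms" figure is just one frame of video at roughly 30 fps, so the right delay depends on your project's frame rate and how many frames the display lags. A quick sketch (the frame-rate list is my own, not tied to any particular box):

```python
# One frame of latency expressed in milliseconds at common video frame rates.
# A display that lags by N frames needs roughly N times this delay on the audio.
frame_rates = [23.976, 24, 25, 29.97, 30, 50, 59.94, 60]
for fps in frame_rates:
    print(f"{fps:>7} fps -> {1000 / fps:.1f} ms per frame")
```

At 29.97 fps that works out to about 33.4 ms per frame, which is presumably where numbers like "33 ms" come from.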