How to not use the system default audio device?

Hey there. I need some help!

I have used several of the 2.3.x server variations, and they all seem to behave similarly.

I have a Windows 10 computer with an NVIDIA P620 video card (4 DisplayPort outputs), with CasparCG driving two of those: one for alpha, the other for fill.

I would like the audio to only and always be sent via the screen consumer that's driving the fill output, not via the 'system default' audio device.

The behavior I currently see is that when I start CasparCG Server, it uses whichever output device is the system default, even when I use the Windows 10 Settings app to assign a specific audio-out device to the CasparCG executable.

So…

How is this done? What magical configuration incantation must I whisper to achieve the desired outcome?

I have searched high and low, but have not stumbled upon the needed information thus far.

In some cases I have seen `[true|false]`, and in other places other XML elements, and I've tried those, but seemingly not with the result I am looking for.

So, helpful information is appreciated!

Took a quick look at the OAL code. It only connects to the default audio device. If you can handle modifying the code and building CCG yourself, then you could hardcode the name of the device in alcOpenDevice here. You might be able to get the names by running ffmpeg (ffmpeg -list_devices true -f openal -i dummy out.ogg) or something else that uses OpenAL.

Now here are some pointers if anyone wants to make this configurable and supply a PR:

  1. Available devices and their names need to be enumerable, either by listing them on server startup or via an AMCP command, possibly named "INFO SYSTEM-AUDIO".
  2. Support for a configuration setting like <system-audio><device>DEVICENAME</device></system-audio> needs to be added.
  3. Perhaps add support for multiple audio outputs via AMCP ADD (e.g. ADD 1 AUDIO [devicename]).

Keep in mind that most of us don’t use system-audio so there might be little incentive for anyone to implement changes like this.

Thank you for the response! It is very 'illuminating', and surprising that this is not already possible.

I will take a look at the build instructions and see where that leads.

Thank you very much for the pointers!

Best Regards 🙂

Success (sort of).

I have added code that manages to:

  • interrogate the OpenAL device list (and output the device names to stdout during startup)
  • look for `<system-audio><device-name>` in the consumer config XML to allow the user to set the device name
  • when a matching name is found between what's available and what's configured, use that device for audio playout

So, that works nicely. I have tried starting / restarting while the system audio device is set to a different device, and the audio continues to come from the configured device. Yay!

BUT, the build I have (made per the instructions, as far as I can see) does not seem to also build the locales, and even when I copy over the locale data from other instances of the server, I still see the following error:

[0220/110608.670:ERROR:gpu_process_transport_factory.cc(990)] Lost UI shared context.

and later:

[0220/110623.773:ERROR:viz_main_impl.cc(184)] Exiting GPU process due to errors during initialization

So, what’s happening here?

Which steps/things am I missing?

This error does not seem to affect playout of video, but, it does prevent me from using HTML templates to do any character generation! (which I need, of course).

Help is appreciated!

OK, well, after some digging, I did find that by adding:

<html>
    <enable-gpu>true</enable-gpu>
</html>

to the configuration, I can again render HTML CG.

But what about the generation of the locale data?

It seems like something is missing from the build instructions, or perhaps there's some optional thing that needs to be enabled somewhere?

I would still like some help, please!

Thank you 🙂

One workaround would be to use an audio de-embedder. You could use a BMD SDI to TRS converter, or a more modern approach that can de-embed from NDI to Dante, ASIO or Reaper.

Hmmmm. Yeah, perhaps so, but it seems a bit like the cure might be worse than the symptom, because keeping the audio in sync with the video would be a major headache, I think.

I'm wondering if running 2 separate instances of CasparCG would be workable, each having only one channel and its own audio device. The computer has 4 HDMI video outputs, i.e.: instance 1 uses video outputs A+B, with A also carrying the audio, while instance 2 uses video outputs C+D, with audio on C. The CasparCG Client seems to be able to handle multiple CasparCG instances as long as they each have a separate IP address to bind to. Windows hosts can have multiple IP addresses, so binding each instance to a specific IP address would probably be OK, and probably simpler than using the same IP address and keeping the ports separately assigned?

It is sufficient if you use different ports. You can set the port in the config. Use separate config files for each instance. You can start Caspar from a command line, adding the name of the config file as a parameter. I normally use a simple batch file for that.
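For example, the second instance's config only needs a different AMCP port in the controllers section (the filenames `casparcg-a.config` and `casparcg-b.config` here are made up; 5250 is the default port):

```xml
<!-- in casparcg-b.config: give the second instance its own AMCP port -->
<controllers>
  <tcp>
    <port>5251</port>
    <protocol>AMCP</protocol>
  </tcp>
</controllers>
```

Then the batch file just starts each instance with its own config file as the argument, e.g. `casparcg.exe casparcg-a.config` and `casparcg.exe casparcg-b.config`.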

Interestingly, while I can run multiple CasparCG servers on the same host, the CasparCG Client will not allow me to add the same IP address more than once, even if the ports are different.

bummer.
