Best practice / advice for adding CG to a virtual set?

Hi,

My use case is as follows:
I usually do live events/IMAG for a nonprofit, but due to covid we’re trying to do a virtual convention.
I’m setting up a virtual studio, chroma keying with a set of fixed camera angles/FOVs (this is new to me).
I need to create virtual screens/displays to affix to e.g. the front of the presenter desk, to send to a DSK in the ATEM.

How would you compose a 2D cg and transform it to match the different camera angles?
My gut feeling is to compose it in a channel, and then apply mixer transforms to the entire channel by routing it to a layer in a separate channel, but I apparently hit a bug there.

I don’t really understand what the problem is. Depending on the number of cameras and upstream keyers in your ATEM, you can either do it entirely inside the ATEM or use a Caspar output for each camera. You could input each camera into Caspar and do the chroma key and compositing in the channel. You would need a Decklink input and output per camera, so for 3 cameras you need 3 inputs and 3 channels, each with an output. Route the outputs to the ATEM for mixing like you would without the chroma key. Be aware of the delay; you will need to delay the audio as well.
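A rough sketch of that per-camera setup as AMCP commands, here built as strings in Python. Channel/layer numbers, the `VIRTUAL_SET_BG` clip name, the Decklink device indices, and the chroma parameters are all illustrative assumptions; the exact `MIXER CHROMA` syntax also differs between CasparCG 2.1 and 2.2/2.3, so check the AMCP docs for your version:

```python
def camera_channel_commands(channel: int, decklink_in: int, decklink_out: int):
    """Build AMCP commands for one chroma-keyed camera channel (sketch)."""
    return [
        # Virtual set background/CG on a lower layer (clip name is an assumption)
        f"PLAY {channel}-10 VIRTUAL_SET_BG LOOP",
        # Live camera from a Decklink input on a higher layer
        f"PLAY {channel}-20 DECKLINK DEVICE {decklink_in}",
        # Key out the green screen on the camera layer
        # (2.0/2.1-style syntax; threshold/softness values are guesses)
        f"MIXER {channel}-20 CHROMA GREEN 0.10 0.04",
        # Send the composited channel back out to the ATEM
        f"ADD {channel} DECKLINK {decklink_out}",
    ]

# One such block per camera, e.g. 3 cameras -> 3 channels:
for cmd in camera_channel_commands(1, 1, 2):
    print(cmd)
```

These strings would be sent over the AMCP TCP connection (port 5250 by default), one per line.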

It’s less a problem than a question on how to approach it.
If the same CG is viewed from different angles, I would create it once and then route it to different channels to be transformed / have perspective added(?)
I guess I could then either use CCG 2.1, route the entire channel, and transform the routed layer in the destination channel,
or use CCG 2.2/2.3, route each layer in the channel, and transform each one individually in the destination channel.
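The two routing approaches above can be sketched as AMCP commands. The channel/layer numbers and the corner coordinates are illustrative assumptions, not values from the thread; `MIXER PERSPECTIVE` takes the four corners as eight normalized x/y values (upper-left, upper-right, lower-right, lower-left):

```python
def route_whole_channel(src_ch: int, dst_ch: int, dst_layer: int):
    """Route an entire source channel into one destination layer, then
    transform that layer (the whole-channel approach)."""
    return [
        f"PLAY {dst_ch}-{dst_layer} route://{src_ch}",
        # Corner coordinates are placeholder values for illustration
        f"MIXER {dst_ch}-{dst_layer} PERSPECTIVE 0.1 0.1 0.9 0.15 0.9 0.85 0.1 0.9",
    ]

def route_single_layer(src_ch: int, src_layer: int, dst_ch: int, dst_layer: int):
    """Route one source layer at a time and transform each routed layer
    individually (the per-layer approach)."""
    return [
        f"PLAY {dst_ch}-{dst_layer} route://{src_ch}-{src_layer}",
        f"MIXER {dst_ch}-{dst_layer} PERSPECTIVE 0.1 0.1 0.9 0.15 0.9 0.85 0.1 0.9",
    ]
```

Either way, the source CG only has to be composed once; each destination channel just holds its own perspective for its camera angle.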

Am I missing something?

Why would you want to route it? Is the background animated somehow? The route and perspective commands work from version 2.0.7 onwards…

It’ll likely show Twitter feeds, webpage info, video playout, etc.
If I didn’t route it, wouldn’t I have to transform it in the origin channel whenever the camera angle changes?

I don’t think you’ll get away with doing the transform ‘live’ based on the camera angle, so in this case, with all the Twitter content and so on, I would also do it with route and process every camera individually.
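Since the camera angles are fixed, one way to manage this (an assumption on my part, not something from the thread) is to store one perspective preset per angle and apply it to the routed layer when cutting to that camera. The corner values and channel/layer numbers below are placeholders; the trailing duration/tween arguments on `MIXER` commands let the CG ease into place rather than snapping:

```python
# One set of PERSPECTIVE corners per fixed camera angle (placeholder values)
ANGLE_PRESETS = {
    "cam1": (0.12, 0.10, 0.88, 0.12, 0.90, 0.88, 0.10, 0.90),
    "cam2": (0.05, 0.20, 0.80, 0.10, 0.85, 0.80, 0.08, 0.95),
}

def apply_angle(channel: int, layer: int, angle: str, frames: int = 0) -> str:
    """Build the MIXER PERSPECTIVE command for a stored camera-angle preset."""
    corners = " ".join(f"{v:g}" for v in ANGLE_PRESETS[angle])
    # Optional tween so the transform animates over `frames` frames
    tween = f" {frames} easeinoutsine" if frames else ""
    return f"MIXER {channel}-{layer} PERSPECTIVE {corners}{tween}"
```

With presets like these, the operator (or a small controller script) only has to fire one command per cut instead of re-entering eight coordinates live.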

Most frame-accurate virtual sets are done by compositing each camera before the vision switcher, especially if there are camera moves involved.