CCG 2.2.0 HTML5 performance and live video

This was more than likely discussed in the previous forum. Let me explain: we’re looking to build a large video wall. We’ve done so in the past. Back then I used a Chrome browser, running in kiosk mode, across 8 screens, each at 1080x1920 (portrait). We set up Windows extended desktop this way, and it has been running in our TV studio for 5 years, 24/7. The outcome and flexibility have been awesome. I tried CCG (2.0.7 with HTML5) back then, but performance was sluggish.

Now we want to take this to the next level. We need a live video source in our wall. It will consist of 3 landscape 1920x1080 screens. One way to do it: run each screen from a separate PC. Run Chrome on two of them. On the third, the one with live video, run NDI Studio Monitor full screen. Use one NDI source for the live video, run CCG output via NDI with alpha, and overlay it in Studio Monitor. Keep the three parts in sync with Node.JS (we already sync our graphics this way). All the separate elements work. The limitation is that live video can only appear on one of the screens and can’t be positioned freely to span more than one screen.
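
For reference, the Node.JS sync can be as simple as a small WebSocket relay that every screen client connects to. This is a minimal sketch, not our actual code, assuming the `ws` npm package; the message fields and the template name are just placeholders:

```typescript
// Minimal sync relay: every render client (Chrome page, Studio Monitor
// companion script, etc.) connects and receives the same cue at the same time.
// Assumes the `ws` npm package (npm install ws); message names are made up.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

// Broadcast a cue to every connected screen client.
function broadcast(cue: object): void {
  const msg = JSON.stringify(cue);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(msg);
    }
  }
}

wss.on("connection", (socket) => {
  console.log("screen client connected");
  socket.on("message", (data) => {
    // Any client (or the studio automation) can push a cue; relay it to all.
    broadcast(JSON.parse(data.toString()));
  });
});

// Example: fire the same animation on all three screens, with a small lead
// time so every client can schedule it for the same wall-clock moment.
setTimeout(() => {
  broadcast({ type: "play", template: "lower-third", at: Date.now() + 200 });
}, 5000);
```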

The other idea would be: run these 3 screens from 1 PC, using Windows extended desktop like we did before. Now only 3 screens instead of 8. Bring in live video (SDI via a BMD card works; NDI would be nicer), overlay graphics, all in HTML5 inside CCG.
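
If it all runs inside CCG on one PC, the wiring could look roughly like this over AMCP. This is only a sketch, assuming one wide channel spanning the three screens; the layer numbers, the graphics URL and the exact DECKLINK/[HTML] producer syntax (which varies a bit between CCG versions) are assumptions:

```typescript
// Rough sketch: drive one CasparCG channel that spans all three screens
// (e.g. a 5760x1080 custom video mode) from Node over AMCP (TCP 5250).
import { Socket } from "net";

const amcp = new Socket();

function send(cmd: string): void {
  amcp.write(cmd + "\r\n"); // AMCP commands are CRLF-terminated
}

amcp.connect(5250, "127.0.0.1", () => {
  // Live SDI input from the BMD card on a lower layer...
  send("PLAY 1-10 DECKLINK DEVICE 1");
  // ...scaled and positioned onto the middle third of the wall
  // (MIXER FILL uses normalised coordinates: x y x-scale y-scale).
  send("MIXER 1-10 FILL 0.3333 0 0.3333 1");
  // HTML5 graphics (with alpha) on a layer above it, covering the full wall.
  send("PLAY 1-20 [HTML] http://localhost:3000/wall");
});

amcp.on("data", (d) => console.log(d.toString().trim()));
```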

Any idea how this would perform? CEF 32 back then vs CEF 63 now in 2.2.0?

I also came across this GPU (http://www.advoli.com/). They use AMD GPUs and extend displays over HDBaseT. Very nice concept, and this card would let me easily output 3x 1920x1080 over a distance. But how does CCG 2.2.0 (HTML5 with live video) perform on an AMD GPU?

I know: lots of questions. And I realise I’ll probably end up testing it all myself. But there’s a lot of knowledge here, and I hope some of you might have some answers …

I recently did a similar project using CasparCG with DeckLink Quad 2 cards to output split screens over SDI. I do it by adding two channels without any output hardware attached. On the first I play the content on different layers, with triggers coming from the studio automation. On the second channel I route the first channel down to a single layer. Then I route this layer to (currently 6) SDI output channels and upscale each one using a transform command. That doesn’t give the full resolution on the screens, but the cameras are only HD anyway, so no one notices.
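
Roughly, the routing and upscaling looks like this in AMCP (sent here from Node just for illustration). The channel and layer numbers and the 3x2 split are examples, not the exact values from that installation:

```typescript
// Hedged sketch of the route/upscale approach: channel 1 is the full
// playout with its layers, channel 2 collapses it to a single layer, and
// channels 3..8 each crop/upscale one sixth of it for an SDI output.
import { Socket } from "net";

const amcp = new Socket();
const send = (cmd: string) => amcp.write(cmd + "\r\n");

amcp.connect(5250, "127.0.0.1", () => {
  // Collapse the multi-layer playout channel onto one layer of channel 2.
  send("PLAY 2-10 route://1");

  // Feed that single layer into six SDI output channels (3..8) and use
  // MIXER FILL to blow each one up so it only shows its own tile.
  let out = 3;
  for (let row = 0; row < 2; row++) {
    for (let col = 0; col < 3; col++) {
      send(`PLAY ${out}-10 route://2-10`);
      // FILL x y x-scale y-scale: scale by 3x2 and shift by -col/-row so
      // only this tile of the composite remains visible on the output.
      send(`MIXER ${out}-10 FILL ${-col} ${-row} 3 2`);
      out++;
    }
  }
});
```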

In that studio they have a single big screen where they do the duplex (live talk to an external video feed). To make it easier for the sound department (there is latency on a live input going through Caspar), we feed the graphics backgrounds for that screen to an input on the mixer’s second ME and add the live feed there. That way there is no delay when we switch to full screen in the program out. If you do it via SDI it is much easier to transport the feeds to the studio screens, and you can always send one or more feeds via the vision mixer. See their program here.

I would keep the infrastructure, since you don’t seem to be unhappy with it at all, and instead look into bridging NDI sources to WebRTC. There are some commercial SDKs, but honestly I would look into GStreamer, since it has plugins for both NDI and WebRTC.

The challenge here would be synchronised playback of a WebRTC source, but if that doesn’t work out you end up with the same limitations you already described for NDI Studio Monitor, with the benefit of keeping the infrastructure you already have.
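
As a starting point, here is a very rough sketch of just the NDI ingest leg, spawning gst-launch-1.0 from Node. The ndisrc/ndisrcdemux element, property and pad names come from the gst-plugins-rs NDI plugin and may differ between versions; the WebRTC leg (webrtcbin plus a signalling application) is not shown here:

```typescript
// Smoke test for the NDI leg only: pull an NDI source with GStreamer and
// render it in a local window. Element names are assumptions based on the
// gst-plugins-rs NDI plugin; the NDI source name is hypothetical.
import { spawn } from "child_process";

const ndiName = "STUDIO-CAM-1"; // hypothetical NDI source name

const gst = spawn("gst-launch-1.0", [
  "ndisrc", `ndi-name=${ndiName}`,
  "!", "ndisrcdemux", "name=demux",
  // "demux.video" assumes the demuxer exposes a pad named "video".
  "demux.video", "!", "queue", "!", "videoconvert", "!", "autovideosink",
]);

gst.stdout.on("data", (d) => process.stdout.write(d));
gst.stderr.on("data", (d) => process.stderr.write(d));
gst.on("exit", (code) => console.log(`gst-launch exited with code ${code}`));
```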

AMD performance is a mixed bag. Some people report problems, others don’t. Might be use case specific.