I’ve always wondered how some apps, like Watchout on Windows or Millumin on Mac, use a GPU output that isn’t part of the OS desktop and is dedicated solely to the content output.
I found out the name of the ‘feature’ on Windows 10:
Is this how Watchout does it? I wonder if the devs have considered implementing something similar in Caspar. I get that this is Windows only, so maybe it would have to be a lower-level implementation that talks to the GPU directly, so it works on Linux as well.
Why do I think this could be useful? Basically to use screen consumers in a more secure way that ensures the OS won’t interfere with them at all. The use case for this is of course videowalls with non-standard resolutions.
To be fair, the new custom resolution options added by @Julusian in the latest versions cover some of the needs we previously had for screen consumers, since those cases can now be handled by a Decklink.
But there are still several cases where being contained inside a broadcast signal with unused areas can become an issue. The first example that comes to mind is signal processors such as the Barco E2 / S3, where the total number of pixels is limited: if your playout sends a full 4K frame but your content doesn’t use all 8,294,400 pixels, you are crippling the processor’s ability to use more inputs and/or layers.
Sending only the pixels actually needed through the GPU output and a screen consumer is the only solution when your signal processor and use case can’t absorb the unused overhead, but the risk of the OS overlaying something is always there (the amount of settings you have to tweak to minimize that risk is not trivial).
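To put rough numbers on that overhead, here’s a quick sketch. The videowall geometry below (a 2×1 wall of 1080p panels carried inside a full UHD raster) is a hypothetical example, not taken from any specific Barco spec:

```python
# Hypothetical example: a 3840x1080 videowall canvas carried inside a full
# UHD frame, as you'd have to do when playout can only emit standard rasters.
full_frame = 3840 * 2160   # full UHD raster: 8,294,400 pixels
content    = 3840 * 1080   # actual videowall content: 4,147,200 pixels

wasted = full_frame - content   # pixels the processor ingests but never shows
print(f"wasted: {wasted} px ({wasted / full_frame:.0%} of the input)")
```

In this (made-up) case half the processor’s input bandwidth for that source is spent on black padding, which is exactly the capacity you’d rather spend on extra inputs or layers.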
I am aware that the screen consumer cannot be expected to maintain 100% sync between video and audio, but with videowalls sometimes there’s no audio, or the content is not that sensitive to sync. I wonder how Watchout, Millumin or Resolume handle this.
Sorry about the long post, looking forward to reading your thoughts!