There is something going on with the HTML producer. It gets really slow after some time playing out templates over and over on my development PC.
It happens (or gets worse) when:
- Adding and removing templates (testing and tweaking the intro, next and outro animations, often many times per minute)
- CEF is in GPU mode
- The template has some console.log() lines (a stripped-down example follows this list)
- The log level is debug
- The diagnostics window is shown
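A sketch of what I mean by a template with console.log() lines (hypothetical, just the standard CasparCG template hooks with some logging; the real templates are of course more complex):

```html
<!DOCTYPE html>
<html>
<body>
<script>
    // Standard CasparCG HTML template entry points,
    // each with a console.log() call (illustrative only).
    function update(data) { console.log('update: ' + data); }
    function play()       { console.log('play'); }
    function next()       { console.log('next'); }
    function stop()       { console.log('stop'); }
</script>
</body>
</html>
```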
I noticed that the GPU memory usage of one of CasparCG's processes keeps going up and isn't freed when I remove and re-add the template. Eventually it exceeds 100% of dedicated VRAM, everything gets laggy and the whole system stutters.
I understand there is not yet a way to remove the producer from within the template, but I thought the server itself would replace the producer when a new template is added. Apparently that is not the case.
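To be clear, the test cycle from the AMCP console looks roughly like this (channel, layer and template names are just examples), repeated many times per minute:

```
CG 1-10 ADD 1 "intro" 1
CG 1-10 REMOVE 1
CG 1-10 ADD 1 "next" 1
CG 1-10 REMOVE 1
```

Each ADD allocates more VRAM and the REMOVE never frees it.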
Also, why does the producer show late frames when idle? The channel format is 1080p50 with a single NDI consumer, and fields are disabled (that shouldn't really matter since the channel is progressive anyway).
CasparCG Server 2.3 builds 715449df and 7829b13b show this issue. Build 4a8f03f3 does free the VRAM, but it's unusable: CEF is skipping frames like crazy.
PC is an i7-950 (X58 chipset, 12 GB RAM)
GPU is a GTX 960 2 GB
OS is Windows 10 build 18363
I suspect that logging can be blocking somehow. DIAG is a resource hog. Any specific reason to run in GPU mode? I stay away from it as it's underperforming (for me at least).
You can think of running a template as opening a new window. Clearing the layer should destroy the window. The main CEF process continues to run even after you remove all templates.
Late or dropped? CCG does count dropped frames when a template is idle (but it’s just CEF not sending a new frame since nothing has changed).
Performance is far better on GPU: my test CPU is very slow, and the production one is fast but usually busy with other programs. Gradients and filters render smoothly on GPU but not on CPU.
Another very good reason is antialiasing: I tested both modes and couldn't enable smoothing on the NDI output without the GPU enabled.
Here is a comparison I made testing out NDI antialiasing and color subsampling:
This is true, but it doesn't solve the VRAM leak. Performance is far better at higher resolutions (and with specific content like big SVGs or canvases) with GPU enabled and the Nvidia settings tweaked. If you only handle a few templates over the server's lifetime, or run a single template for a long time, this won't be an issue; but if you have to constantly add and remove templates it's not an option at all.
These settings work great for me on a GTX 960 (and on Quadro P2000 too):
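On the CasparCG side, GPU mode is switched on via the html block in casparcg.config; the relevant part looks something like this:

```xml
<html>
    <remote-debugging-port>0</remote-debugging-port>
    <enable-gpu>true</enable-gpu>
</html>
```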
In our case I’m afraid it’s the latter, so we will have to continue without GPU until the leak is solved.
For a second I thought that enabling the GPU also caused dropped frames on video playout after some hours of playing several clips, but that problem seems to come from somewhere else. I'll try lowering the log level next…
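(The log level can be changed at runtime from the AMCP console, so that should be quick to test, e.g.:)

```
LOG LEVEL info
```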