Reduce FFmpeg producer buffer on RTMP streams

I’m working on a project that involves receiving RTMP feeds (over 4G) and playing them out through CasparCG with added graphics. I came up with a simple client that plays each stream in a separate channel when it becomes available and shows a poster when it goes offline. All good.
Most of the time the streams get restreamed to social networks directly, but in some cases the streams would serve as a live conference with an on-field reporter so low latency is crucial. I managed to lower that to ~2s on 4G and ~1s on LAN.
That 1s delay looks intentional, and it is counted from the first keyframe (so the final latency is 2~3s total). I tried many parameters in the PLAY command without any luck; I’m not sure whether I’m doing it wrong or the FFmpeg producer simply works that way.
I’ve found many FFmpeg consumer argument examples all over the place but not much on the producer. All I found is this syntax:
PLAY <channel>-<layer> <resource> <parameters> ...
But it doesn’t specify quoting rules or give any examples.
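For what it’s worth, here’s a minimal sketch of how a client might build and send that PLAY command over AMCP. Port 5250 is CasparCG’s default AMCP port; the host and stream URL are placeholders, and `send_amcp` is just a hypothetical helper. Quoting the resource is how AMCP keeps paths/URLs with spaces intact:

```python
import socket

AMCP_PORT = 5250  # CasparCG's default AMCP port

def amcp_play(channel: int, layer: int, resource: str, *params: str) -> str:
    """Build an AMCP PLAY command line.

    The resource is quoted so spaces survive; any extra producer
    parameters are appended space-separated, as in:
    PLAY 1-10 "rtmp://..." LOOP
    """
    parts = [f'PLAY {channel}-{layer} "{resource}"']
    parts.extend(params)
    return " ".join(parts) + "\r\n"  # AMCP commands end with CRLF

def send_amcp(host: str, command: str) -> None:
    # Hypothetical helper: open a TCP connection and send one command.
    with socket.create_connection((host, AMCP_PORT), timeout=5) as s:
        s.sendall(command.encode("utf-8"))

# Example command (placeholder URL, not actually sent here):
cmd = amcp_play(1, 10, "rtmp://example.com/live/reporter1")
```

Whether any of the buffering-related parameters actually reach the FFmpeg producer this way is exactly the open question in this thread.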

Can anyone help me out with that?


There is an old fork of CCG that was used with incoming RTMP and very low latency. It uses mplayer instead of ffmpeg and is more stable with RTMP streams.


Does CCG 2.0.7 support iVGA?

Yes, it does.


I’ll give it a try (if I can build that) and report the findings back here.

I think you can find links to builds in one of the wikis on GitHub.

Following this thread :wink: I might be doing some rtmp stuff this summer and have my eyes set on using OBS for playback. I find that to be a bit more stable, but mostly related to a/v sync and not delay.

What sort of delay are we talking about when you use plain ffplay vs CasparCG, and what arguments do you need to add to achieve lower delay in ffplay?

I don’t think CasparCG does anything special to increase buffer on network inputs.

That is true for the delay, too… it takes at least half a second less to get the stream started. I handled the A/V sync on the server side: in nginx I enabled interleaving, wait-for-video and wait-for-key, and also reduced the audio sync threshold to 100ms.
For my use case I’d rather have the stream freeze for a while than suffer reduced quality or audio drift.
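For reference, the nginx-rtmp-module directives I mean look roughly like this (directive names are from the module’s documentation; the application name is a placeholder):

```
rtmp {
    server {
        listen 1935;
        application live {       # placeholder application name
            live on;
            interleave on;       # interleave audio and video in one chunk stream
            wait_video on;       # hold audio until the first video frame arrives
            wait_key on;         # start each subscriber on a keyframe
            sync 100ms;          # audio/video sync tolerance
        }
    }
}
```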
I think OBS is my baseline. Looking at the stream in OBS and the CCG output I can clearly see the 500~1000ms extra latency in CCG, like it’s hardcoded somehow, either waiting for a new keyframe or buffering a fixed amount of data. OBS shows the stream right away; not very well, but it starts immediately and then adjusts. You can easily hold a full-duplex conversation with OBS’s latency, not so with CCG’s.
I would use OBS to receive the streams, but CasparCG will let me automate stream availability in a production environment. The plan is to have a multiview with all the streams active at all times with a failover poster image (if you’ve seen SpaceX webcasts you know what I’m looking for: camera streams are unstable and they can’t do anything about it, but they don’t look unprofessional when they drop) and also publish the final streams automatically to social media, if required, as an auxiliary stream.

CCG, VLC and FFplay all have this latency too, even with network-caching reduced and nocache args. I tried nocache, rtmp_buffer at various values, -fflags -nobuffer (that shows a libavformat error, so there’s that), -nocache… with quotes, without them, with hyphens, with double hyphens. Nothing seems to make any difference.
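For comparison, the form I’d expect to minimize buffering in plain ffplay is something like this (the stream URL is a placeholder, and note that `nobuffer` takes no leading minus after `-fflags` — a leading minus there clears the flag instead of setting it, which may explain the libavformat error):

```
ffplay -fflags nobuffer -flags low_delay \
       -probesize 32 -analyzeduration 0 \
       rtmp://example.com/live/stream1
```

These are plain FFmpeg/ffplay options; whether CasparCG’s producer accepts any of them in the PLAY command is still unclear to me.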

Quick test example (a screenshot was attached for each case):

  • CasparCG over LAN (WiFi)
  • OBS over LAN
  • CasparCG over LTE
  • OBS over LTE

So to summarize, the latency is:

  • OBS: 0.57s (LAN) / 1.18s (LTE)
  • CasparCG: 2.18s (LAN) / 2.19s (LTE)

I hope this helps you in any way. Thanks!