DTS label generation

Hi community

Can anyone tell me what key should be set in CasparCG when generating an output UDP stream using FFmpeg, so that the stream contains DTS (Decode Time Stamp) values? I have attached two screenshots from the DVBinspector analyzer: in the first there are no DTS tags (this stream is from CasparCG version 2.3.2); the second was produced simply by FFmpeg with the -c copy switch.

…I have no idea what that is :slight_smile:


DTS is the standard abbreviation for Decode Time Stamp.

The DTS is helper data intended to assist the decoder in managing data flow through its buffers in minimum-latency, low-resource terminals. The DTS tells the decoder the time (measured in PCR 33-bit counter values) by which it should have completed decoding the associated reference frame into a pixel buffer. The decoded frame is then available when a B coded picture needs it (to meet the PTS timing for display). It is possible to decode an H.264 stream without DTS, but the decode latency is usually increased. In H.262 (MPEG-2) the B frames do not get stored as reference frames, but in H.264 B coded content can be used as a reference for future B coded frames.
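To make that concrete, here is a small Python sketch (not CasparCG or DVBinspector code) of how PTS and DTS sit in a PES packet header; the two-bit flag at byte 7 is essentially what the analyzer is reporting as present or absent:

```python
def decode_ts(b: bytes) -> int:
    """Unpack a 33-bit PTS/DTS from its 5-byte PES encoding (marker bits interleaved)."""
    return (((b[0] >> 1) & 0x07) << 30) | (b[1] << 22) | \
           ((b[2] >> 1) << 15) | (b[3] << 7) | (b[4] >> 1)

def pes_timestamps(pes: bytes):
    """Return (pts, dts) from a PES header; either may be None."""
    assert pes[:3] == b"\x00\x00\x01", "not a PES start code"
    flags = pes[7] >> 6          # 0b10 = PTS only, 0b11 = PTS and DTS
    pts = decode_ts(pes[9:14]) if flags & 0b10 else None
    dts = decode_ts(pes[14:19]) if flags == 0b11 else None
    return pts, dts

# Hand-built video PES header carrying PTS = 90000 (1 s) and DTS = 86400 (0.96 s):
pes = (b"\x00\x00\x01\xe0\x00\x00\x80\xc0\x0a"
       + bytes([0x31, 0x00, 0x05, 0xbf, 0x21])   # PTS field, prefix '0011'
       + bytes([0x11, 0x00, 0x05, 0xa3, 0x01]))  # DTS field, prefix '0001'
print(pes_timestamps(pes))  # (90000, 86400)
```

Both timestamps count ticks of the 90 kHz base clock, and the DTS is always less than or equal to the PTS of the same access unit.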

Why do you think your stream needs a DTS in the PES header? Does the stream decode without the DTS? What are the encode properties for the CasparCG output, for example is it generating AVC-Intra as used for DPP delivery of content? Intra-only content does not require DTS.


Here is an example of my config:

            <args>-format mpegts -codec:v h264_nvenc -filter:v format=pix_fmts=yuv420p,fps=50,interlace -flags:v +ilme+ildct -mpegts_flags +system_b -rc:v cbr_ld_hq -muxrate 9.0M -max_delay 2M -b:v 8M -minrate:v 8M -maxrate:v 8M -bufsize:v 27M -g:v 24 -codec:a mp2 -b:a 192k -filter:a pan=stereo|c0=c0|c1=c1 </args>

When watching the stream (for example in VLC), after about 20 minutes I observe the video and audio drifting out of sync.

On several pieces of equipment that generate a DVB-C signal, I have also seen loss of sync when these marks were absent.

Maybe the problem is in the old FFmpeg library?

I tried a stream remux outside CasparCG, simply copying the codecs with five versions of FFmpeg, and all of them produced the timestamps by default.

Your coding configuration looks appropriate. There should be no requirement to specifically enable DTS. If there are reference pictures that have specific decode time point requirements, the encode process should include them. Your transport stream rate looks “safe” relative to the elementary stream rates. I’m not sure where ffmpeg measures the bit rates: before the encapsulation into NAL (Network Abstraction Layer) units, Packetised Elementary Stream packets and transport packets, or whether the various layer headers are accommodated within the stream rate for the elementary stream. I have worked with coders, one using the bit rate pre-packetising and one post-packetising. Creating a “safe” mux containing streams from both coder types is best described as “challenging”.
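As a back-of-the-envelope check of that “safe” margin, here is a rough sketch using the rates from the config above. The 188/184 factor covers only the 4-byte transport packet headers; PES headers, PAT/PMT, PCR-bearing packets and stuffing eat further into the headroom, so treat the result as an upper bound:

```python
# Rough TS headroom check (all rates in bits/s), using the config values above.
muxrate = 9_000_000          # -muxrate 9.0M
video   = 8_000_000          # -b:v 8M (CBR, min = max = 8M)
audio   =   192_000          # -b:a 192k
payload = video + audio

# Each 188-byte TS packet carries at most 184 bytes of payload, so
# packetisation alone adds at least 188/184 - 1 ≈ 2.2% on top of the ES rate.
ts_overhead = payload * (188 / 184 - 1)
headroom = muxrate - payload - ts_overhead

print(f"ES total:      {payload / 1e6:.3f} Mb/s")
print(f"TS overhead >= {ts_overhead / 1e6:.3f} Mb/s")
print(f"headroom   <= {headroom / 1e6:.3f} Mb/s")
```

With these numbers the mux is left with well under 1 Mb/s of slack, which is workable but not generous for a CBR service.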

The standard code base for server 2.3.2/2.3.3 (bootup screen value and Changelog value may not match!) is now a few years old, and the ffmpeg apps do have some minor pruning to include only the elements relevant for CasparCG operations, so you may be correct about the old library. You can run the versions included in the CasparCG folder to list their properties. But PTS and DTS have been part of the transport stream specification from the initial release of ISO/IEC 13818 part 1 (the MPEG-2 transport stream and program stream spec), so I would expect them to have been embedded in ffmpeg for many years.

I can see you are offloading some of the compute to the dedicated hardware in the graphics card, and this should help CasparCG concentrate on the AV playback process. I assume the CasparCG server log does not list any dropped frames? A CasparCG dropped frame could be a cause of lip sync changes.

Loss of lip-sync is always difficult to trace. All decoders have to use the local 27 MHz clock, regenerated from the embedded PCR values, together with the PTS values in the PES headers to start the replay of synchronised media, using the PTS in the elementary streams for video, audio and (when present) subtitles.
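In concrete terms, the standard MPEG-TS clock relationships look like this (a small sketch, not decoder code): the PCR counts the 27 MHz clock as a 33-bit base at 90 kHz plus a 9-bit extension (0–299), while PTS/DTS use the 90 kHz base directly.

```python
# MPEG-TS clock relationships (ISO/IEC 13818-1).
PCR_HZ = 27_000_000   # system clock; PCR = base * 300 + extension
PTS_HZ = 90_000       # PTS/DTS tick rate (PCR_HZ / 300)

def pcr_to_seconds(base: int, ext: int) -> float:
    """Convert a PCR (33-bit base at 90 kHz, 9-bit extension 0-299) to seconds."""
    return (base * 300 + ext) / PCR_HZ

def pts_to_seconds(pts: int) -> float:
    """Convert a 33-bit PTS/DTS value to seconds on the same timeline."""
    return pts / PTS_HZ

# A PTS one second into the timeline lines up with a PCR base of 90000:
print(pts_to_seconds(90_000))      # 1.0
print(pcr_to_seconds(90_000, 0))   # 1.0
```

Because both counts derive from the same 27 MHz system clock, a decoder that regenerates that clock accurately can schedule every elementary stream against it; drift appears when the decoder stops comparing its output position back to the PTS values.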

Once the replay is started, decoders tend to assume that they can output the streams using just time counts taken from the regenerated clock, and they do not keep monitoring the clock/PTS relationship. So if a minor glitch occurs, such as the loss of an ethernet packet or dropped audio or video source content, the loss of lip-sync is not corrected.

Noting a second question you posted in the forum, I assume you are running CasparCG server on a Linux platform. If you are using Windows it is possible to use NDI as a low-latency mezzanine compression to pipe the output of Caspar to a separate H.264 compression system that is optimised for its compression task. Unfortunately NewTek do not offer NDI for Linux. The compression coder can be run on the same PC as the CasparCG server, using localhost to pass the stream, or on another computer on your local network. When you see a lip-sync loss on the decoded H.264 stream you can use an NDI monitor tool to check whether the source feeding the encoder is still in sync.

They do, and CasparCG has supported it since 2.3. You are probably thinking of the old iVGA consumer, which was Windows-only and for which NewTek released an NDI compatibility library.

CasparCG should probably have its version of ffmpeg updated before the 2.4 release; hopefully I will have the time to do that.

Thank you for a very comprehensive answer.

Yes, at first glance there are no dropped frames in the logs (I will double-check in more detail).

One more question: could you tell me whether, and how, I can connect the latest FFmpeg library to this version of Caspar, describing the process in as much detail as possible?


You are absolutely right that NDI is supported by this version of Caspar; we want to run tests once we find the most suitable encoder for this.

@Julusian, thanks for spotting my error on NDI support. I should have spotted the API support on the Newtek site, but tripped over the lack of NDI tools (monitor, test signal generator etc.).

Glad to hear you may have found the source of the developing lip-sync issue. Finding the underlying reason for any dropped frames is likely to be rather “challenging”. I have had some playbacks that report a dropped frame, but looking at the recorded output via the DeckLink on a Blackmagic Hyperdeck I am often unable to locate any temporal discontinuity.

I have no experience of building the server executable from the distribution code base, so I’m not able to help with the process of updating ffmpeg. Julusian implements many such updates, and he may be able to direct us all towards the processes required.

Yeah, that’s true. It is pretty much a bare-minimum level of support, lacking all the useful tools that they have for macOS and Windows…

I am currently looking at making the main builds use ffmpeg 5, and writing down the process to make future updates easier. Updating ffmpeg has been neglected as the current version works fine for the main sponsors.

The latest build is now using ffmpeg 5.1


Thank you Julusian,
Is SRT enabled in this build?
and what is the CEF version on latest build, I’m using your build of CEF 95.

I have not checked what the ffmpeg builds have enabled.

That version sounds about right. There are some known issues with the current version, so at NRK we are currently using the latest CasparCG with the previous version of CEF. Fixing these bugs and updating it further is being looked into, but there is no time frame for when that will be done.
