Precision Time Protocol (PTP) is the quiet foundation of modern SMPTE ST 2110 facilities. When it’s working well, no one notices. When it isn’t, the symptoms show up everywhere else – dropped frames, audio drift, lip-sync issues, or streams that simply refuse to play. 

In our recent webinar, “Taming the Timing Beast: PTP Mismatch & Smarter Sync Strategies,” Leader’s Kevin Salvidge and Steve Holmes explored why timing problems in hybrid and IP facilities are rarely caused by simple misconfiguration. Instead, most failures arise from real-world operational behaviour: network dynamics, clock performance, reference strategy and, critically, visibility.

This article distils the key themes and lessons into a practical guide for anyone designing, deploying, or operating ST 2110 systems. 

The real causes of PTP problems 

A recurring message throughout the webinar was this: most PTP failures are operational, not theoretical. 

Even correctly configured systems can suffer from: 

  • Grandmaster instability 
  • Network path delay variation and asymmetry 
  • PTP packet loss 
  • Boundary or transparent clock misbehaviour 
  • Limitations in follower clock oscillators 
  • PTP profile mismatches 
  • Insufficient monitoring and alerting 

In many cases, everything appears “locked”, yet the system is still wrong. Without proper measurement and monitoring, timing issues often remain invisible until they cause on-air impact. 

Why PTP is foundational to ST 2110 

In an ST 2110 environment, PTP provides the common time reference that allows video, audio, and ancillary streams to remain aligned across the network. 

A typical architecture looks like: 

Grandmaster → Boundary / Transparent Clocks → End Devices 

PTP enables deterministic, frame-accurate and sample-accurate media transport. Without it, ST 2110 streams drift relative to one another, buffers behave unpredictably, and synchronisation breaks down. 

PTP should be treated as core infrastructure, not just another network service. 
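To make the “core infrastructure” point concrete, here is a minimal linuxptp ptp4l configuration sketch using commonly cited SMPTE ST 2059-2 profile values (domain 127, eight Sync and four Announce messages per second). Treat these values as assumptions to verify against the standard and your vendor documentation, not as authoritative settings – a profile mismatch on any one of them is enough to break interoperability.

```ini
# ptp4l config sketch -- illustrative ST 2059-2-style values, verify before use
[global]
domainNumber            127   # commonly cited ST 2059-2 default domain
priority1               128
priority2               128
logAnnounceInterval     -2    # 4 Announce messages per second
logSyncInterval         -3    # 8 Sync messages per second
logMinDelayReqInterval  -3
announceReceiptTimeout  3
delay_mechanism         E2E
network_transport       UDPv4
```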

Measuring timing: Beyond “Is it locked?” 

One of the most important operational lessons is that lock status alone is meaningless. You never measure a clock in isolation; all timing measurements are relative. The real question is not “is this clock locked?”, but (as the sketch after this list helps answer): 

  • What is it locked to? 
  • How far away is it from the reference? 
  • How stable is that relationship over time? 
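
On a Linux host running linuxptp, the first two questions can be answered from the command line with the pmc management client. The sketch below polls the grandmaster identity and the reported offset from master; it assumes linuxptp is installed, ptp4l is running with its local socket accessible, and pmc’s text output matches recent releases. The 1 µs watch level is illustrative, not normative.

```python
#!/usr/bin/env python3
"""Ask a PTP follower what it is locked to, and how far away it is.

Sketch only: assumes linuxptp's pmc is installed, ptp4l is running,
and pmc's text output format matches recent releases.
"""
import re
import subprocess
import time

def pmc_get(field: str, dataset: str) -> str:
    """Run pmc over the local UDS socket and extract one field."""
    out = subprocess.run(
        ["pmc", "-u", "-b", "0", f"GET {dataset}"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(rf"{field}\s+(\S+)", out)
    if match is None:
        raise RuntimeError(f"{field} not found in pmc output")
    return match.group(1)

while True:
    gm = pmc_get("grandmasterIdentity", "PARENT_DATA_SET")    # what are we locked to?
    offset_ns = float(pmc_get("offsetFromMaster", "CURRENT_DATA_SET"))  # how far away?
    print(f"GM {gm}  offset {offset_ns:+.0f} ns")
    if abs(offset_ns) > 1_000:   # illustrative 1 µs watch level
        print("warning: time error above the practical ST 2110 target")
    time.sleep(10)               # trend this over hours, not seconds
```

The third question – stability over time – falls out of logging these samples and trending them, which is exactly what the phase-comparison methods below formalise.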

Phase comparison is the universal method 

Whether comparing PTP, LTC, word clock, or video reference, the measurement always reduces to phase comparison over time. This reveals: 

  • Phase offset 
  • Frequency drift 
  • Time error 
  • Wander or jitter (depending on timescale) 

Best practice for PTP measurement 

In ST 2110 systems, the most reliable method is to compare 1 PPS outputs from PTP-locked devices. Measuring PPS-to-PPS alignment exposes real timing behaviour, including network asymmetry and grandmaster quality, far more effectively than checking lock indicators or logs. 
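
As a sketch of what that comparison yields, the following decomposes a series of PPS-to-PPS time-error samples (for example, from a time-interval counter) into the quantities listed above: static phase offset, frequency drift, and jitter. The sample data at the bottom is synthetic, purely for illustration.

```python
"""Decompose PPS-to-PPS time-error samples into offset, drift and jitter.

Sketch: assumes one time-error reading per second, in seconds, e.g. from a
time-interval counter comparing two devices' 1 PPS outputs.
"""
import numpy as np

def analyse_pps(time_error_s: np.ndarray, interval_s: float = 1.0) -> dict:
    t = np.arange(len(time_error_s)) * interval_s
    # Least-squares line: intercept = phase offset, slope = frequency drift.
    slope, intercept = np.polyfit(t, time_error_s, 1)
    residuals = time_error_s - (slope * t + intercept)
    return {
        "phase_offset_ns": intercept * 1e9,
        "freq_drift_ppb": slope * 1e9,           # s/s expressed in parts per billion
        "jitter_rms_ns": residuals.std() * 1e9,  # what's left after removing the trend
        "peak_to_peak_ns": np.ptp(time_error_s) * 1e9,
    }

# Synthetic example: 200 ns offset, 2 ppb drift, 20 ns of measurement noise
rng = np.random.default_rng(0)
samples = 200e-9 + 2e-9 * np.arange(60.0) + rng.normal(0, 20e-9, 60)
print(analyse_pps(samples))
```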

GPS as a reference: Useful, but not foolproof 

GPS is widely used as a reference for PTP, particularly in mobile units and OB trucks, and it can work extremely well – but only if it’s engineered properly. 

GPS is reliable when: 

  • Antenna placement is robust 
  • Holdover performance is well understood 
  • BMCA behaviour is controlled 
  • Time error is continuously monitored 

GPS becomes dangerous when: 

  • Lock is treated as binary truth 
  • Holdover is assumed rather than tested 
  • Systems automatically re-elect clocks without oversight 
  • Operators only see “Locked / Unlocked” 

In mobile environments, GPS should be treated like mains power – essential, expected to fail occasionally, and risky if failure isn’t planned for. That means pairing GPS with high-quality holdover oscillators and monitoring time error continuously. 
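
A quick way to test holdover rather than assume it is a back-of-envelope budget: time error in holdover grows roughly as TE(t) = f₀·t + ½·a·t², where f₀ is the oscillator’s fractional frequency error at the moment the reference is lost and a is its ageing rate. The figures in the sketch below are illustrative assumptions, not vendor specifications.

```python
"""Back-of-envelope holdover budget: how fast does time error accumulate?

Sketch with illustrative numbers -- substitute your oscillator's measured
initial frequency error and ageing, not datasheet optimism.
"""
def holdover_time_error_us(t_s: float, freq_offset_ppb: float,
                           ageing_ppb_per_day: float) -> float:
    # TE(t) = f0 * t + 0.5 * a * t^2, returned in microseconds
    ageing_per_s2 = ageing_ppb_per_day * 1e-9 / 86_400
    te_s = freq_offset_ppb * 1e-9 * t_s + 0.5 * ageing_per_s2 * t_s ** 2
    return te_s * 1e6

# A 1 ppb initial error alone crosses the 1 µs practical target in ~1000 s
for t in (100, 1_000, 10_000):
    print(f"{t:>6} s -> {holdover_time_error_us(t, 1.0, 0.5):.3f} µs")
```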

ST 2110 over WAN: Timing still matters 

ST 2110 can be transported over WAN links, but its timing model assumes near-real-time delivery. While fixed propagation delay can be compensated for, variable delay and asymmetry introduce real risk. 

If buffering limits are exceeded or RTP timestamps fall outside the receiver’s acceptable timing window, streams may be rejected unless latency is explicitly managed or timestamps are regenerated. 
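
To see why, recall that ST 2110 RTP timestamps are derived from PTP time via the media clock (90 kHz for video), so a receiver can compare each packet’s timestamp against its own PTP-derived time. The sketch below shows that comparison; the function names and the 30 ms example delay are illustrative assumptions.

```python
"""Compare an arriving RTP timestamp against local PTP-derived time.

Sketch: assumes the ST 2110 convention of RTP timestamps derived from PTP
time on a 90 kHz video media clock; names and numbers are illustrative.
"""
RATE = 90_000   # 90 kHz video media clock
WRAP = 2 ** 32  # RTP timestamps are 32-bit and wrap

def rtp_from_ptp(ptp_time_s: float) -> int:
    return int(round(ptp_time_s * RATE)) % WRAP

def timestamp_error_us(rtp_ts: int, local_ptp_time_s: float) -> float:
    """Signed, wrap-aware distance between packet timestamp and local time."""
    expected = rtp_from_ptp(local_ptp_time_s)
    diff = (rtp_ts - expected + WRAP // 2) % WRAP - WRAP // 2
    return diff / RATE * 1e6

# A packet stamped 30 ms ago reads as roughly -30,000 µs against local time;
# if that exceeds the receiver's buffer window, the stream is rejected.
now = 1_700_000_000.0
print(f"{timestamp_error_us(rtp_from_ptp(now - 0.030), now):+.1f} µs")
```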

The key takeaway is that it’s not raw delay that breaks ST 2110. It’s poor PTP alignment at the endpoints. 

Why PTP often fails first in remote production 

In remote production workflows, PTP is frequently the first system to fail, even when video and audio appear healthy. That’s because PTP was designed for low-latency, symmetric LANs, while remote production often relies on: 

  • Carrier-managed networks 
  • Asymmetric paths 
  • Variable latency 

Common challenges include path asymmetry, GPS dependency without sufficient holdover, BMCA instability across sites, and a lack of meaningful time-error monitoring. Many failures manifest as “locked but wrong”, making them particularly difficult to detect without the right tools. 
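
Path asymmetry deserves special emphasis because the arithmetic is unforgiving: PTP’s delay mechanism assumes the forward and reverse one-way delays are equal, and any imbalance lands in the clock as a constant offset of half the asymmetry. The delays in the sketch below are illustrative.

```python
"""Why asymmetric paths produce 'locked but wrong' clocks.

PTP estimates offset assuming symmetric one-way delays; the resulting
error is half the asymmetry. Delays here are illustrative assumptions.
"""
def ptp_offset_error_us(fwd_delay_ms: float, rev_delay_ms: float) -> float:
    # Offset error = (forward - reverse) / 2, converted from ms to µs
    return (fwd_delay_ms - rev_delay_ms) / 2 * 1_000

# A carrier path with 2 ms out and 1 ms back yields a steady 500 µs error --
# the follower reports a clean lock while sitting half a millisecond off.
print(ptp_offset_error_us(2.0, 1.0))
```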

Time error: What’s acceptable? 

While standards don’t mandate a single numeric limit, real-world broadcast practice is clear: 

  • < 1 µs time error is the practical target for ST 2110 video and audio 
  • ~10 µs is often treated as a warning threshold 
  • > 50 µs typically indicates serious synchronisation problems 

Anything approaching half a video frame is far too loose for professional ST 2110 operation. 
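
Encoded as a simple monitoring rule, those practical thresholds look like the sketch below; the band labels are ours, not drawn from any standard.

```python
"""Map a measured time error onto the practical thresholds above."""
def classify_time_error(te_us: float) -> str:
    te = abs(te_us)
    if te < 1:
        return "ok: within the practical ST 2110 target"
    if te < 10:
        return "elevated: above target, below the ~10 µs warning threshold"
    if te < 50:
        return "warning: past the ~10 µs threshold"
    return "alarm: serious synchronisation problem"

for te_us in (0.4, 4.0, 20.0, 120.0):
    print(f"{te_us:>6} µs -> {classify_time_error(te_us)}")
```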

Visibility is the difference between stable and fragile systems 

One of the strongest conclusions from the webinar was that PTP problems rarely announce themselves clearly. Without tools that show time error, GM identity, priority values, and long-term trends, systems can remain in a degraded state for extended periods. 

Effective commissioning and continuous monitoring are not optional extras, but essential to stable IP media operation. 

Final Thought 

PTP doesn’t usually fail loudly. It fails quietly, gradually, and often invisibly until the consequences are impossible to ignore. 

Taming the timing beast isn’t about chasing perfect configuration. It’s about understanding real-world behaviour, engineering for failure, and maintaining visibility into the one system everything else depends on.