Let’s Look At Low Latency
Latency is the time it takes for a system to do something – the gap between pressing the button and the action starting.
In video, it is the time between the light hitting the lens and the image appearing on the viewer’s screen. In the days of analogue television, latency was virtually zero, and audiences appreciated that when something was live, it really was live. Since then, successive technological advances have actually increased latency.
Some of this is due to increased processing, not least the conversion from analogue to digital, and then compressing and encoding that digital stream. IP delivery, whether for a contribution circuit or distribution over the internet, adds further latency: each node on the internet is in effect a store-and-forward point, where data streams are buffered and stabilised before being passed on to the next.
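To make that concrete, here is a minimal sketch of how per-stage buffering accumulates into end-to-end latency. The stage names and delay figures below are illustrative assumptions, not measurements of any real service:

```python
# Illustrative pipeline stages with assumed per-stage delays in seconds.
# These numbers are examples chosen for clarity, not measured values.
PIPELINE = [
    ("capture and A/D conversion", 0.05),
    ("encode", 0.5),
    ("packaging", 2.0),
    ("network/edge buffering", 4.0),
    ("player buffer", 8.0),
]

def end_to_end_latency(stages):
    """Glass-to-glass latency is simply the sum of every buffer in the chain."""
    return sum(delay for _, delay in stages)

print(f"{end_to_end_latency(PIPELINE):.2f} s")  # prints 14.55 s
```

The point of the sketch is that no single stage dominates: shaving latency means trimming every buffer in the chain, which is why the article stresses fine-tuning every part of the process.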
Why latency is important
When internet streaming first became a practical reality, services were forced to accept latencies of between 30 and 60 seconds, because the technology of the day allowed nothing better. But the aim was always to lower the latency, to get as close to zero as possible.
For an obvious example of why latency matters to a service operator, consider sport, where enjoying the action in the moment – and at a consistent latency – is vital. Imagine hearing your neighbour cheering while you have to wait half a minute or more to see the goal. More significant still, consider a betting service that receives the action and results later than potential gamblers do.
Long latencies also make interactivity between remote contributors impossible, as each contributor has to wait for a signal to reach them, and their reply is equally delayed. With the growing interest in live streaming for corporate communications – annual reports and general meetings as well as product launches – this is a really significant concern.
What is considered low latency?
Today, we consider anything below 15 seconds to be low latency. The goal, though, is ultra-low latency, which is generally seen to be below one second. To achieve these goals calls for every part of the process to be fine-tuned and working in perfect co-ordination.
The first efforts towards low latency were based on the HLS streaming protocol. HLS – HTTP Live Streaming – was developed by Apple to deliver live streams over standard HTTP to HTML5 players. Working with RTMP – the Real-Time Messaging Protocol – for ingest, it brought typical streaming latencies down to around 12 to 15 seconds, firmly in the low-latency band.
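Why segment-based HLS tends to settle in that range comes down to simple arithmetic: players typically buffer several whole segments before starting playback. The segment duration, buffer depth and encode delay below are assumptions for illustration, not figures from the HLS specification:

```python
def hls_latency_estimate(segment_duration, segments_buffered, encode_delay=1.0):
    """Rough HLS latency: the player holds a few whole segments,
    plus the time to encode and publish the current segment.
    All inputs are in seconds."""
    return segments_buffered * segment_duration + encode_delay

# Assumed: 4-second segments, three segments buffered, ~1 s encode delay.
print(hls_latency_estimate(4, 3))  # 13.0, inside the 12-15 s band cited above
```

Shorter segments reduce the estimate, but each segment still has to be completed and published before the player can fetch it, which is why segment-based delivery struggles to go much below a few seconds.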
But that solution is a bit of a blind alley. RTMP traces its origins back to Flash, which is most certainly dead. Although RTMP continues to live as an ingest format for live streaming, the reality is that, like the use of HLS, it is limiting to operators and users alike. What is needed is a more open format that allows people to choose their platform of preference while still achieving excellent latency performance.
The power of WebRTC
WebRTC – web real-time communication – is an open standard, supported natively by all the major browsers and designed for sub-second exchange of audio and video. This foundation enables inventive developers to create streaming platforms which achieve the goal of low latency. For PlayBox Technology, one of our solutions is the cloud-hosted streaming software called Eurisco, designed particularly for occasional and opportunistic applications as well as rolling services.
The use of the cloud means that the software can scale automatically to meet demand for WebRTC streams. Using adaptive bitrate realtime streaming, it can quickly be customised to provide precisely the service parameters the user requires.
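Adaptive bitrate streaming, mentioned above, boils down to picking the best rendition the viewer’s connection can sustain. Here is a minimal sketch of that selection logic; the bitrate ladder and safety margin are illustrative assumptions, not Eurisco’s actual parameters:

```python
# Assumed bitrate ladder in kbit/s; a real service publishes its own renditions.
LADDER = [400, 800, 1500, 3000, 6000]

def pick_rendition(measured_kbps, ladder=LADDER, safety=0.8):
    """Choose the highest rendition that fits within a safety margin of the
    measured throughput, falling back to the lowest rendition otherwise."""
    budget = measured_kbps * safety
    candidates = [r for r in ladder if r <= budget]
    return max(candidates) if candidates else min(ladder)

print(pick_rendition(2500))   # budget 2000 kbit/s, so 1500 is chosen
print(pick_rendition(300))    # too slow for any rendition: lowest, 400
```

The safety margin leaves headroom so that a brief dip in throughput does not immediately stall the player, which matters more at sub-second latencies where there is almost no buffer to absorb the dip.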
The service comes with ready-made iOS and Android apps and an easy-to-use SDK for building web players. As a universal streaming player, it supports all the popular delivery protocols, so it can be accessed by most consumers.
Most important, its architecture means it can deliver live streams with a sustained latency of around half a second. That meets the industry definition of ultra-low latency, but for the viewer it simply means that the service is transparent and immediate.
Eurisco can be part of a turnkey PlayBox streaming service, which includes multiple graphic layers, delivery to all the popular social media platforms, and sophisticated editorial modes.
Latency is a serious issue in video streaming, in both the contribution and the distribution phases. Reducing the latency makes remote production practical, and delivers the content to the audience when they want it. Development teams are continuing to work on reducing latency, to achieve consistent performance and synchronised delivery to all viewers where practical, while making the service available to all users on all devices.