Will the Cloud ever be fit for live production?

Historically, dedicated hardware has been the only way to achieve professional-grade, real-time media transport, processing and monitoring. However, that is beginning to change with the emergence of products on the market that deliver an increasing proportion of their functionality in software. Talk is now turning to moving live production to the Cloud.

Although this transition is currently some way off, constant developments in public and private Cloud mean that Cloud delivery could soon be commonplace.

While Cloud processing and delivery have proved themselves for non-time-critical, asynchronous transactions such as file sharing or transcoding, current limitations, including the Internet's inherently variable speed and weaker security guarantees, mean it isn't yet suitable for live production requirements.

If Cloud technology is ever to be used for this purpose, it first must overcome these limitations.

Improving public Cloud delivery

Latency

Low latency has always been critical to the broadcast industry, particularly in live production, as it keeps people and equipment tightly synchronized. However, in terms of both processing and transport latency, the Cloud is currently unable to match what specialized hardware and dedicated networks deliver today.

Processing latency is largely the result of CPUs struggling to handle, in real time, the huge volume of data that video processing requires. Fortunately, this challenge can be overcome thanks to field-programmable gate array (FPGA) acceleration, which is becoming part of some Cloud providers' offerings.

Quality of service

Quality of service is another obstacle to using the public Cloud for live production. As the transport of packets over the Internet is less reliable than over a private IP network, there is a risk that not every IP packet will reach its destination.

Some of the traditional protection mechanisms for media networks, such as SMPTE ST 2022-7 (dual-path protection), are not really effective because precise Internet routing cannot be controlled. At the same time, TCP, which is typically used for Internet applications, is too slow, as every packet must be acknowledged (ACK) to the sender by the receiver. So the industry has been working on a negative-acknowledgement (NACK) approach, whereby the receiver only tells the sender when a packet hasn't arrived. The RIST (Reliable Internet Stream Transport) standard, created in 2017, is the first non-proprietary protocol of this kind.
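The NACK idea is easy to picture in code. The following is a purely illustrative Python sketch, not the RIST wire format (RIST builds on RTP/RTCP): the receiver watches the sequence numbers on incoming packets and, when it spots a gap, asks the sender to resend only the missing ones.

```python
import socket
import struct

# Illustrative NACK receiver loop; NOT the RIST wire format. Assumes each
# media packet begins with a 16-bit sequence number, and a gap in that
# sequence triggers a retransmission request on a separate return channel.
def receiver_loop(media_sock: socket.socket,
                  nack_sock: socket.socket,
                  sender_addr: tuple) -> None:
    expected = None
    while True:
        packet, _ = media_sock.recvfrom(2048)
        (seq,) = struct.unpack("!H", packet[:2])  # 16-bit sequence number
        if expected is not None and seq != expected:
            # Gap detected: request only the missing packets, rather than
            # acknowledging every packet as TCP does.
            missing = expected
            while missing != seq:
                nack_sock.sendto(struct.pack("!H", missing), sender_addr)
                missing = (missing + 1) & 0xFFFF
        expected = (seq + 1) & 0xFFFF
        # ...hand packet[2:] to the playout buffer here...
```

Because the receiver stays silent while packets arrive in order, the return channel carries traffic only when something is lost, which keeps round trips, and with them latency, to a minimum.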

Security

Security was one of the major concerns when Cloud-based systems first emerged, but heavy investment by leading Cloud providers means the risk is now much lower than it was. In fact, Cloud solutions now typically boast secure infrastructures and encryption, which makes security less of a worry.

Bandwidth

Professional-quality video demands a lot of bandwidth and I/O processing capability: a single uncompressed HD signal runs at up to 3 Gb/s, and UHD at around 12 Gb/s. As yet, public Cloud providers are unable to move real-time uncompressed video in and out of their infrastructure at these rates, which constrains the media functions that can run successfully in real time on a public Cloud.

Connectivity

If public Cloud delivery is to become the norm, it is clear that signal and media function orchestration will be fundamental to the proper functioning of the entire workflow.

In a baseband world, the workflow is largely determined by the physical location and connectivity of equipment and the core router. In IP, that connectivity is logical – in other words, the physical connectivity of equipment is typically in place, but it is the control layer that determines how the media flows between the various pieces of equipment.
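To make that distinction concrete, the toy sketch below (all device names and multicast addresses are invented) shows a control layer re-patching routes purely by changing state; the physical links never move.

```python
# Toy model of logical routing: physical links are fixed, and a control-
# layer table decides which sender's flow each receiver subscribes to.
# All names and addresses below are invented for illustration.

senders = {
    "camera-1": "239.10.0.1:5004",  # multicast group carrying the flow
    "camera-2": "239.10.0.2:5004",
}

routes = {
    "vision-mixer/in1": "camera-1",  # receiver -> sender
    "vision-mixer/in2": "camera-2",
    "multiviewer/in1": "camera-1",   # one flow can feed many receivers
}

def take(receiver: str, sender: str) -> None:
    """A 'take' re-patches a route purely in software."""
    routes[receiver] = sender
    print(f"{receiver} now subscribes to {senders[sender]}")

take("vision-mixer/in1", "camera-2")  # no cables moved, only state changed
```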

However, as media functions become virtualized, workflows involve connecting instances of software across or even within software-defined platforms. If that software is running in the Cloud, it may even involve spinning up and tearing down instances of the media functions, based on the processing capacity required.
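As a sketch of what such orchestration could look like (the pool, launch and terminate logic below are hypothetical, not any Cloud provider's API), an orchestrator might continually reconcile the number of running media function instances against the current workload:

```python
from dataclasses import dataclass, field

# Hypothetical capacity-based orchestration: instances of a virtualized
# media function are spun up or torn down as the stream count changes.
@dataclass
class MediaFunctionPool:
    name: str                   # e.g. "uhd-encoder"
    streams_per_instance: int   # capacity of a single instance
    instances: list = field(default_factory=list)

    def reconcile(self, active_streams: int) -> None:
        # Ceiling division: how many instances does the current load need?
        needed = -(-active_streams // self.streams_per_instance)
        while len(self.instances) < needed:
            self.instances.append(self._launch())
        while len(self.instances) > needed:
            self._terminate(self.instances.pop())

    def _launch(self) -> str:
        instance_id = f"{self.name}-{len(self.instances)}"
        print(f"spinning up {instance_id}")
        return instance_id

    def _terminate(self, instance_id: str) -> None:
        print(f"tearing down {instance_id}")

pool = MediaFunctionPool(name="uhd-encoder", streams_per_instance=4)
pool.reconcile(active_streams=10)  # scales up to three instances
pool.reconcile(active_streams=3)   # scales back down to one
```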

This hints at an entirely new role for orchestration systems: they need to become software- and virtualization-aware, otherwise the benefits of virtualization cannot be realized.

Private Cloud

The dedicated nature of the private Cloud means that many of the current limitations of the public Cloud, such as performance, reliability and security, can be addressed. In a private Cloud, the equipment need not run on COTS hardware at all; it can also be bespoke hardware or software-defined platforms, which still deliver the best performance for broadcasters, especially for video processing.

In its simplest form, a private Cloud is not much more than a data center on the broadcaster’s premises, where signal processing and transport equipment is pooled.

Currently, the equipment is usually owned and managed by the broadcaster. However, ownership, along with management of the infrastructure, can be handed over to a service provider, which then provides the equipment's functionality as a service to the broadcaster.

With the current performance of dedicated IP networks, it is entirely possible to locate a live-signal processing data center quite some distance from the broadcaster's facilities, effectively in a "Cloud of Real-Time".

The future of live production

For most real-time broadcast media transport, processing and monitoring, software-defined platforms built on hardware optimized for performance currently provide the best approach. Private Cloud is now an alternative option, allowing broadcasters to use specialized software-defined platforms or COTS hardware, both on-site and off-site.

Technology is evolving at pace, and it is only a matter of time before COTS and public Cloud usage become a reality for real-time broadcast production.
