Deploying 3D TV in 2011
However, television broadcast differs greatly from prepackaged cinema or Blu-ray Disc™ content: it must contend with an infrastructure containing numerous, often independent, stages of content repurposing.
How can broadcasters effectively deploy 3D TV channels today?
Delivering 3D Content
Essentially, there are two methodologies for delivering 3D content: frame compatible 3D (“3D-in-2D”) and 2D compatible 3D.
Frame compatible 3D is aimed at delivering 3D content within the bounds of the existing broadcast infrastructure. All broadcast/direct-to-home (DTH) 3D services that are on-air today are based on 3D content created by pre-processing separate left and right eye images so that they appear to existing broadcast equipment and consumer set-top boxes (STBs) as standard 2D HD content.
Concurrently, there are also industry efforts to deliver full resolution HD 3D TV to the home, typically using methods that enable the same content to be backwards compatible with existing 2D televisions. This is known as 2D compatible 3D. The existing standard for Multi-view Video Coding (MVC), an amendment to the H.264 | MPEG-4 AVC standard, provides an interesting technology avenue for bit-rate efficient, full resolution HD, stereoscopic (3D) transmission. Scalable Video Coding (SVC) has also been suggested as an alternative to MVC for delivering 2D-compatible full resolution 3D material.
Although the Blu-ray Disc™ Association has announced the deployment of MVC-compatible chipsets in its players, adoption by the broadcast industry of this or any other standard that impacts legacy deployed equipment would require replacing the enormous existing ecosystem, and would therefore hinder any short-term 3D deployment.
Distribution to the Home
All of the announced broadcast/DTH 3D services that launched in 2010 are frame compatible 3D. Production equipment is already available, existing 2D broadcast equipment and consumer STBs can be reused, and new 3D TVs are on sale, with more models being added throughout the year.
In the majority of announced cases, 3D is packaged into a frame compatible video stream by resolution sub-sampling (spatial compression) and then multiplexing the “half resolution per eye” images together. The reverse process is employed in the newly announced 3D TV sets: the TV set is aware that the received content is, in fact, multiplexed left eye and right eye images, and internal post-processing (including up-sampling and left/right polarization) is done to render the images as 3D. Many spatial sub-sampling methods are defined, but the two most common are side-by-side (half resolution horizontally) and over/under, also known as top-and-bottom (half resolution vertically).
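The pack/unpack round trip described above can be sketched in a few lines of NumPy. This is a minimal illustration using naive 2:1 decimation and nearest-neighbour up-sampling; real broadcast equipment applies proper low-pass filtering and interpolation, and the function names here are illustrative, not from any product.

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Halve each eye horizontally and multiplex both into one HD frame."""
    assert left.shape == right.shape
    # Naive 2:1 horizontal sub-sampling (keep every other column).
    half_left = left[:, ::2]
    half_right = right[:, ::2]
    return np.concatenate([half_left, half_right], axis=1)

def unpack_side_by_side(frame: np.ndarray):
    """Split the packed frame and up-sample each eye back to full width."""
    w = frame.shape[1]
    half_left, half_right = frame[:, : w // 2], frame[:, w // 2 :]
    # Nearest-neighbour up-sampling; a 3D TV would use a better interpolator.
    return np.repeat(half_left, 2, axis=1), np.repeat(half_right, 2, axis=1)

left = np.zeros((1080, 1920), dtype=np.uint8)
right = np.full((1080, 1920), 255, dtype=np.uint8)
packed = pack_side_by_side(left, right)
print(packed.shape)  # (1080, 1920): still looks like a standard 2D HD frame
```

The packed frame is indistinguishable from ordinary 2D HD to downstream equipment, which is exactly why the format survives the existing broadcast chain; the cost is the halved per-eye resolution that the decimation step discards.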
As mentioned above, existing headend or broadcast station equipment can process frame compatible 3D content as if it were 2D. The extra bitrate budget required by the pre-processed content is still largely an open question. It depends on both the content and the efficiency of the compression equipment and, regardless of the method chosen, can add anything from 10 percent up to 40-50 percent over the equivalent 2D picture. It is therefore paramount that any 3D channel launch using frame compatible methods uses the best available compression technology, both to maximize the consumer's early 3D experience and to strengthen the business case behind any 3D deployment.
The Ericsson EN8190 HD Encoder provides the best MPEG-4 AVC HD compression in the world. Its programmable platform allows for the use of new in-house compression and pre-processing technology designed from the ground up to enable conversion to an all-HD world and expansion to a new 3D world.
3D Contribution
The issues for live 3D contribution differ greatly from those of DTH, the biggest being the need for the best possible asset quality (full spatial resolution, 4:2:2 chrominance, 10-bit precision, etc.) to enable the most effective distribution, post-production and storage, without the bandwidth and legacy-equipment compromises that apply in DTH applications.
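The quality parameters listed above translate into very large uncompressed bandwidths, which is what makes contribution links so different from DTH. A worked calculation for a full-resolution stereo feed, assuming 1920x1080 at 25 frames/s (the frame rate is an illustrative assumption), 4:2:2 chroma and 10-bit samples:

```python
# Uncompressed bandwidth of a full-resolution stereo contribution feed.
width, height, fps, bit_depth = 1920, 1080, 25, 10

luma = width * height                  # Y samples per frame
chroma = 2 * (width // 2) * height     # Cb + Cr at 4:2:2 (half horizontal res)
bits_per_frame = (luma + chroma) * bit_depth
per_eye_gbps = bits_per_frame * fps / 1e9

print(f"per eye: {per_eye_gbps:.2f} Gbit/s")      # per eye: 1.04 Gbit/s
print(f"stereo:  {2 * per_eye_gbps:.2f} Gbit/s")  # stereo:  2.07 Gbit/s
```

Over 2 Gbit/s of raw stereo video has to be carried with as little degradation as possible, which is why contribution encoding favours high-precision formats over the aggressive sub-sampling acceptable in DTH delivery.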
For single-operator content coverage, 3D pre-processing can be performed at the event, either within or outside the (2D) HD-capable SNG equipment used for backhaul. This approach has the advantage of enabling 3D contribution with existing equipment, although it requires setting up the whole production on-site. And while it reduces the overall bandwidth required, pre-processing the content at the source limits its value in post-production, distribution to third parties and future repurposing, because of the degradation caused by spatial sub-sampling.
As a consequence, many operators are likely to deliver a simulcast of full resolution left and right channels instead. This approach has the advantage of delivering the highest fidelity content in a format easily understood by the many stereoscopic production tools - and of being independent of the chosen final method of delivery. However, it also requires the most care in maintaining exact frame and phase alignment between the left and right eye feeds, in order to reconstruct the 3D image accurately.
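The frame and phase alignment requirement can be illustrated with a simple timestamp check across the two feeds. The function name, the presentation-timestamp representation and the tolerance below are all illustrative assumptions, not part of any real system:

```python
# Sketch: detect frame pairs whose left/right presentation timestamps
# have drifted apart. Tolerance and frame rate are illustrative (25 fps).
TOLERANCE_MS = 1.0

def misaligned_frames(left_pts_ms, right_pts_ms):
    """Return indices of frame pairs whose timestamps differ too much."""
    return [
        i
        for i, (l, r) in enumerate(zip(left_pts_ms, right_pts_ms))
        if abs(l - r) > TOLERANCE_MS
    ]

left = [0.0, 40.0, 80.0, 120.0]
right = [0.0, 40.0, 82.5, 120.0]   # third frame has slipped by 2.5 ms
print(misaligned_frames(left, right))  # [2]
```

Even a slip of a few milliseconds between eyes degrades the reconstructed 3D image, which is why simulcast contribution demands exact synchronization and time-stamping end to end.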
With its multi-channel capabilities, Ericsson’s Contribution Encoder and Voyager II provide a natural platform for 3D contribution links, ensuring full control of encoding parameters, exact synchronization and time-stamping of the compressed frames - and the generation of a fully packaged 3D simulcast. Moreover, the fully programmable architecture allows for dedicated pre-processing and the most efficient compression in all 3D coverage scenarios that suffer from bandwidth limitations.
Likewise, the new “Simulsynch 3D” technology included in the Ericsson RX8200 receivers ensures that the exact temporal and spatial relationship between left and right feeds is also maintained at the receive end. Exact reconstruction of the input feed allows for the highest quality post-production and distribution, avoiding potentially severe reductions in the early 3D customer experience.