A Great Success – World Premieres Included!
The seminar programme was accompanied by an exhibition where the Mediagroup member companies and participating firms including Audinate, Concept A, Georg Neumann, KS Audio, KS Digital, Müller-BBM, Pinguin Ingenieurbüro, Schoeps Mikrofone, Sennheiser electronic, Smyth Research, Stage Entertainment, VDT, and Zactrack presented their current product ranges.
Both the lecture and workshop programme and the exhibition showed real innovation. For example, Sennheiser electronic chose the Banz seminar as a platform for the worldwide launch of a prototype of their new digital wireless microphone systems! Also for the first time, Smyth Research presented a professional version of their Smyth Realiser binaural headphone-playback system to the public. Another highlight was the first collaboration of the Mediagroup’s VIVACE system with Zactrack’s automatic follow system that enables moving microphones to be tracked, for example, in the theatre. SALZBRENNER STAGETEC MEDIAGROUP also presented a prototype product in the exhibition, namely a groundbreaking control surface, based primarily on multi-touch screens, for its theatre and conference console TRIAGON. The 3D video presentation of iVu from Mediagroup member company NVS could be experienced in the cinema during the seminar breaks and in the evenings.
The longest journey was undoubtedly made by Aidan Williams, co-founder of Sydney-based company Audinate. In his paper on AVB and Dante, he introduced a vision of the future of audio networking. This illustrated the close cooperation in the audio-IP networking field, building on the partnership Audinate and SALZBRENNER STAGETEC MEDIAGROUP entered into early in 2011.
Another truly impressive highlight was the paper by Vyacheslav Efimov, deputy general director of the State Academic Bolshoi Theatre of Russia in Moscow. He illustrated his overview of the extensive renovation works at the theatre with a large number of current photographs.
Both seminars were led by Martin Wöhr, VDT. Thanks to his breadth of experience, he not only introduced each presentation and workshop but also stimulated a lively discussion about each topic. In addition to the speakers and workshop moderators, four live interpreters for German, English, French and Russian plus 20 other people were involved in setting up and dismantling, exhibition administration, logistics and overall organisation. They made sure the event was a resounding success for the almost 100 participants of the two seminars.
“We drew participants from eleven countries, not only from central Europe but also from Australia, Brazil, China, Russia and the USA,” summarised Stephan Salzbrenner on Friday evening. “During the event, we have shared know-how and technologies from the most up-to-date developments in the pro audio scene. We are very pleased about the successful revival of the Banz seminars after a hiatus of several years and despite the demanding current economic climate. Thanks are due once again to the sponsoring companies and, of course, the speakers.”
The Presentations and Workshops in Detail
The Eurovision Song Contest (ESC), which took place in Düsseldorf, Germany, in May, was one of the most remarkable live-TV shows of the year.
As the host broadcaster, the north German NDR provided technical support and broadcast the contest. Audio engineer Ulli Fricke, who works for the NDR in Hamburg, acted as head of sound for this international production. Opening the broadcast seminar at Banz, he spoke about the technical requirements of the ESC, supported by a colourful presentation with plenty of photos and superlatives. The sheer scale of the production, the physical size of the stage and extremely fast changeovers between live acts required not only consistent microphone management but also meticulous microphone-stand logistics. Besides joking about the new profession of ‘microphone-stand operator’, in his 1-hour paper Ulli Fricke discussed most of the audio tasks involved. These included studio building from standard double containers, doing the TV sound mix on AURUS consoles, the FOH mix, the commentator and intercom systems as well as the in-ear rehearsal room used for setting up the monitor mix, and the unusual location of technical equipment in open containers hoisted up into the arena roof due to lack of space elsewhere.
The next paper by Dr Günther Theile described the latest progress of the ITU surround standards towards 3D audio reproduction (i.e. 5.1 audio plus height information). Dr Theile compared the characteristics of various playback techniques from 2.0 stereo to 5.1 surround to 9.1 AURO-3D, wavefield synthesis and binaural techniques and described the creative possibilities.
For example, AURO-3D in comparison with conventional 5.1 audio provides not only additional height information but also improved listener envelopment, more natural surround sound and clearer representation of reverb depth. The next part of the paper detailed recording techniques appropriate for 9.1 audio and examined meaningful applications. Günther Theile compared the characteristics of various playback techniques with the demands 3D TV or cinema pictures make. Enhanced depth perception and envelopment are particularly important in this context, which is why 9.1 audio (AURO-3D) is especially relevant in this case. However, his summary made it clear that the 9.1 technique is still very new and more practical experience is required. When in doubt, at least for the present, a well-executed 5.1 mix makes more sense than a poorly executed 9.1 mix. For anyone interested in delving deeper into the subject of 9.1 recording, Dr Theile pointed to the www.hauptmikrofon.de website, which contains comprehensive information on various surround techniques.
The SCHOEPS company is highly committed to the search for optimal microphone techniques for the new AURO-3D and 9.1 formats, which is why Dr Helmut Wittek, director of engineering at SCHOEPS Mikrofone GmbH, co-authored Dr Theile’s paper.
Dr Wittek also led a supplementary workshop on the same subject at Banz. Small groups of participants were given the opportunity to compare various recording and production techniques from stereo and 5.1 to 9.1. Illustrated with numerous aural examples, the workshop focused on the enhancement of spatial perception using height information and on the difference between various recording techniques.
The 9.1 system – a 5.1 surround setup plus four extra speakers on a raised ring – was provided by Sennheiser. The setup included nine of the new Neumann KH 120 studio monitors and two of the brand-new KH 810 subwoofers – the latter arrived from the factory just in time for the seminar.
Dr Stephen Smyth of Smyth Research also dealt with the issue of surround sound. His presentation concerned the monitoring of a real reference surround-sound loudspeaker system over headphones. The concept is based on measuring the acoustic transfer function from each of the real loudspeakers in a room to the listener's two ears. These transfer functions are converted to audio filters (binaural room impulse responses) which can then be used to filter the audio signals, in real-time, in an audio processor.
The resulting filtered audio signals, when presented over headphones, recreate for the listener the experience of listening to the audio signals through the original real loudspeakers in the original room. The virtual loudspeakers, heard over headphones, emulate the real loudspeakers heard normally. One problem is that the shape and location of each person's ears are unique and therefore, to achieve an accurate emulation, it is necessary to measure the unique transfer function of each individual listener in the reference surround-speaker environment. To this end, miniature microphones are placed in the ear canals of the subject and binaural recordings are made of swept-sine signals from each of the loudspeakers. From these recordings, made at three different head orientations, the binaural room impulse responses for an individual listener are determined. Together with head-tracking, the personalised transfer function now enables accurate directional reproduction of surround loudspeakers over headphones.
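In signal-processing terms, the emulation is a bank of convolutions: each loudspeaker feed is convolved with the left-ear and right-ear impulse responses measured for that loudspeaker, and the results are summed per ear. The sketch below illustrates that principle only; an actual product such as the Realiser would use fast partitioned convolution and head-tracked switching between measured responses, none of which is shown here.

```python
def convolve(signal, ir):
    # Direct-form FIR convolution of a signal with an impulse response.
    # Real-time systems use fast (FFT-based, partitioned) convolution instead.
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_binaural(speaker_feeds, brirs):
    """Mix per-loudspeaker feeds down to a binaural (left, right) pair.

    speaker_feeds -- one sample list per virtual loudspeaker
    brirs         -- per loudspeaker, a (left-ear IR, right-ear IR) tuple
    """
    length = max(len(sig) + len(brirs[k][ear]) - 1
                 for k, sig in enumerate(speaker_feeds) for ear in (0, 1))
    left, right = [0.0] * length, [0.0] * length
    for k, sig in enumerate(speaker_feeds):
        for ear, acc in ((0, left), (1, right)):
            for i, v in enumerate(convolve(sig, brirs[k][ear])):
                acc[i] += v
    return left, right
```

With one pair of impulse responses per loudspeaker, a 5.1 or 9.1 monitoring setup reduces to six or ten such convolution pairs running in parallel.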
So much for the theory – the real test is always in the listening. So, during both seminar sessions participants were given the opportunity to have their individual transfer functions measured in the local surround-monitoring room and then to listen on headphones via the Smyth Realiser emulation. Listeners had the ability to do a direct A/B comparison between the audio presented over the virtual and real loudspeakers. In this context, the professional version of Realiser, equipped with professional audio interfaces, was displayed for the first time. The official product introduction is scheduled for Q1 of 2012.
No major production, especially in TV broadcasting, happens without the use of intercom facilities. Driven by the move to digital technology and networking, recent trends have moved away from small islands towards campus-wide networked intercom systems. Jürgen Malleck from DELEC’s national and international sales department introduced the largest current digital intercom system design project for an entire broadcast complex in Great Britain. He described the basics of digital intercom and explained various failsafe and security strategies. Following his paper presentation, and during seminar breaks, Malleck explained how to create a modern complex network made up of routers and subscriber units, using a DELEC demonstration system as an example.
Another main topic of the broadcast seminar was loudness metering. Who could be better qualified to present it than Florian Camerer, audio engineer at the ORF in Vienna and one of the pioneers in this area? As chairman of the EBU’s P/LOUD group, Camerer was instrumental in the development of the EBU R128 loudness standard. After outlining the reasons which led to the development of loudness measurement and gain adjustment, he introduced the three main parameters: programme loudness, loudness range and maximum true peak level, and explained their application. Using examples, Camerer described possible working techniques and strategies for introducing loudness metering in the broadcast field. An interesting aspect was that the loudness-range parameter might provide guidance as to whether programme dynamics should be compressed or not. A crucial point of using loudness metering is source-signal normalisation. Although in principle it would be possible to achieve loudness compensation in the consumer environment based on supplied loudness metadata, normalisation at the production stage is preferable. This approach keeps the creative aspects of the mixing process in the engineer’s hands and produces better-quality end results. Europe is already in the middle of the transition to loudness normalisation. It is looking promising that a decades-old problem can finally be solved satisfactorily.
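The normalisation step Camerer described can be sketched in a few lines. This is a deliberately simplified stand-in: a real R128/BS.1770 meter applies K-weighting filters and gating before measuring, both of which are omitted here, so the dB figure below is only a toy proxy for programme loudness.

```python
import math

TARGET_DB = -23.0  # EBU R128 programme-loudness target (-23 LUFS)

def mean_square_db(samples):
    # Toy loudness estimate: mean-square power in dB relative to full scale.
    # A real meter K-weights the signal and gates out quiet passages first.
    return 10.0 * math.log10(sum(s * s for s in samples) / len(samples))

def normalise(samples, target_db=TARGET_DB):
    # Apply the constant gain that brings the measured value onto the target.
    gain = 10.0 ** ((target_db - mean_square_db(samples)) / 20.0)
    return [s * gain for s in samples]

# A 997 Hz test tone at 0.5 peak amplitude, one second at 48 kHz
tone = [0.5 * math.sin(2 * math.pi * 997 * n / 48000.0) for n in range(48000)]
levelled = normalise(tone)
```

The key point survives the simplification: normalisation is a single constant gain derived from a measurement over the whole programme, not dynamic processing of the signal.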
Ralph Kessler from Pinguin Ingenieurbüro, in his workshop on the same subject, presented the practical effects of consistent loudness metering to small groups of participants. Using research from 2003, he first demonstrated the drastic level changes TV viewers experienced at that time, when levels were set using QPPM (Quasi Peak Programme Meter) metering only and cable and satellite providers suffered from additional distribution problems. These level differences occurred not only between programmes on the same channel but in particular when zapping through channels. Then Kessler played the same signal after EBU R128 normalisation and was able to prove that the new policy can definitely mitigate the original living-room problem if it is adhered to over the entire chain from production to transmission and distribution. The disadvantages of automatic loudness metering were not neglected. These arise, for example, when the signal consists of ambience or background music (e.g. chill-out music) only. Operators should not leave everything to the machine but also use their ears. One major advantage of loudness metering is that the recommendation works consistently across all genres. One of the main problems with QPPM-metered programmes is that a specific reference level is recommended for each genre, which makes practical application much more difficult. Using audio samples and the possibilities offered by an AURUS console linked to a PINGUIN loudness logger, one could differentiate easily between material where automatic loudness normalisation is appropriate and material where manual intervention would still be desirable.
In another demonstration workshop, Jens Kuhlmann, service and training engineer at STAGETEC in Berlin, illustrated the editing possibilities for embedded audio and Dolby-E® streams on the NEXUS audio-routing system. The current HD-SDI card for the NEXUS is used as an insert module within the video path and enables all SDI standard formats up to 3G to be edited. The de-embedder side of the assembly reads the audio part of the signals and feeds it to the NEXUS internal bus for further routing, making it available everywhere on the NEXUS network. Conversely, the embedder side of an HD-SDI card inserts any audio signal from the NEXUS system into the video stream. Many features ranging from bypass, clear and replace modes to optional sample-rate converters and decoder interfaces (required for editing audio embedded into asynchronous video signals) to video delay are useful assistants for handling the complex everyday requirements in the networking and encoding jungle.
NEXUS offers the same degree of flexibility when it comes to handling Dolby-E® signals. In addition to transparent forwarding, the system can be fitted with fully integrated encoder and decoder cards. The user controls these components using an intuitive GUI with network-wide access to the numerous parameters and metadata of all the encoders and decoders on the network. For example, the de-embedder side of the NEXUS HD-SDI card extracts the Dolby-E® signal and feeds it to the bus like any other audio signal. From there, the signal is forwarded transparently to the Dolby-E® decoder card where it is decoded for subsequent editing. Dolby-E® encoding handles the signal in reverse order, finally embedding it into an HD-SDI stream. The NEXUS can even process asynchronous video with embedded Dolby-E® data.
Since sample rate converting the HD-SDI signal would destroy the Dolby-E® stream, a different approach is used. The HD-SDI card and the Dolby-E® card are interconnected directly, enabling the Dolby-E® stream extracted from the asynchronous video signal to be forwarded immediately to the Dolby-E® decoder. In this case, the HD-SDI card provides the necessary clocking information. The decoder decodes the audio signal, which is subsequently synchronised to the Base-Device clock as a discrete PCM signal using a sample-rate converter. To the best of our knowledge, this solution is unique in the world of HD-SDI and Dolby-E®!
At the end of the workshop, René Harder, executive assistant at STAGETEC in Berlin, picked up one of the main points of the broadcast seminar and presented the new Broadcast Loudness Metering option for the NEXUS, which complies with EBU Recommendation R128. This is where the new NEXUS Base-Device CPU plays a key role. It implements the standards-compliant measurement algorithms and supplies the data for display on the GUI.
The broadcast console is an important element in the broadcast studio. But what is a broadcast console exactly, and what tasks must it be able to perform? Dr Klaus-Peter Scholz, co-founder and CEO of STAGETEC Entwicklungsgesellschaft in Berlin, talked about the many features the ON AIR 24 broadcast console has to offer. To cater for the console’s diverse applications, each unit is designed individually using appropriate small modules. On the hardware side the range spans from 4-channel units featuring a monitoring panel, for use as a single-operator console, up to a 24-fader unit with touch-screen control. Using intuitive control software, designed specifically for touch-screen operation, this module provides access to additional parameters, especially for more advanced users with sound-engineering experience. There are many configuration options – from pure hardware consoles to desks with touch-screen support, with or without a hardware panel for monitoring, or even featuring full configuration and remote-control options using the comprehensive remote and administration software introduced in autumn 2010. Together with sophisticated, granular user rights, the configuration can be tailored not only to the console specific to each installation but also to each individual user! The audio processing takes place on a rack-mounted card hosted by a NEXUS Base Device. So each console relies on the proven concept of a combined mixing-console and audio-network system which is used on all other STAGETEC consoles. Two different console units were showcased at Banz: a small solution with 8 faders plus touch screen and a portable unit with 12 faders and a monitoring panel.
In their paper, Sven Boetcher, marketing manager at Sennheiser Pro-Audio, and Gerrit Buhe, head of electronics and signal-processing development at Sennheiser’s Professional Systems department, presented basic facts about digital wireless microphones. Unlike many other audio technology fields, wireless digital transmission still presents numerous challenges in comparison with its analogue counterpart. Within the boundaries of what is physically and legally possible, the challenge is to find a balanced compromise between the audio-data rate and audio quality, immunity to noise, operating time and latency. These parameters are interdependent. For example, increasing the transmission data rate would improve the audio quality and might even make data compression unnecessary; on the other hand, doing so would also increase the minimum signal-to-noise ratio required at the receiving end, which in turn might limit reliable coverage depending on the interference present. Using a modulation scheme which utilises the available bandwidth more effectively reduces the power efficiency and thus the battery operating time. Stronger channel coding would improve error correction but would also increase the data rate required to maintain the same audio quality. Choosing a lower transmission data rate by using an audio codec would tighten the bit-error-rate requirements, and the codec’s additional latency must be accepted. As Gerrit Buhe emphasised, comparing digital wireless microphone systems is very difficult due to this complex variety of technical design options. Digital wireless microphones are a very young branch of professional audio technology and are still in the midst of the development phase. After Sven Boetcher’s and Gerrit Buhe’s paper, participants were keen to gain further insight into the current technical developments.
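The rate-versus-robustness trade-off Gerrit Buhe described follows directly from the Shannon capacity bound: squeezing more bits per second through a fixed channel bandwidth raises the minimum signal-to-noise ratio the receiver needs. The sketch below uses only this textbook relationship, with no actual Sennheiser system parameters, to show how quickly the required SNR climbs with spectral efficiency.

```python
import math

def required_snr_db(bits_per_second_per_hz):
    # Shannon bound: C = B * log2(1 + SNR), so the minimum SNR for a given
    # spectral efficiency C/B is 2^(C/B) - 1 (an idealised lower bound).
    snr_linear = 2.0 ** bits_per_second_per_hz - 1.0
    return 10.0 * math.log10(snr_linear)

# Doubling the spectral efficiency costs far more than double the SNR:
for efficiency in (1.0, 2.0, 4.0, 8.0):
    print(f"{efficiency} bit/s/Hz needs at least "
          f"{required_snr_db(efficiency):.1f} dB SNR")
```

Each step up the list demands several extra dB at the receiver, which in a wireless microphone translates into reduced reliable range or higher transmit power, and thus shorter battery life.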
The importance of AVB networking is growing in both broadcast and theatre / live applications. Aidan Williams, co-founder and Chief Technology Officer at Audinate, explained the basic principles of AVB and gave an overview of the technical requirements and possibilities. He introduced the three core AVB standards and their functions: precise time synchronisation, bandwidth reservation in a bridged LAN, and traffic prioritisation in Ethernet switches. AVB transport protocols exist for both Layer 2 (IEEE 1722) and Layer 3 (IEEE 1733). IEEE 1722 is a non-IP transport carrying FireWire frames in Ethernet packets in a single LAN. IEEE 1733 allows AVB services to be used with the standard TCP/IP Real-time Transport Protocol (RTP), supporting routable networks and providing a transition path from existing non-AVB equipment to AVB-capable equipment. When deploying AVB, compliant network switches must be used wherever AVB services are required. When deploying into existing infrastructure, separate island AVB clouds will be formed unless the backbone equipment can be replaced or upgraded to support AVB. In conclusion, Aidan Williams detailed the features Dante offers for establishing an AVB-compliant network. Dante is supported by many pro-audio manufacturers. Late in 2010 a partnership was formed between Audinate and the SALZBRENNER STAGETEC MEDIAGROUP to support Dante in theatre and broadcast products.
The theatre and live sound seminar was launched by Gunter Engel, system developer at Müller-BBM, with his paper on emulating virtual acoustics using the VIVACE digital room-enhancement system. A virtual acoustics generation system must perform several roles: it is intended to enhance the existing room acoustics, to provide the listener with a suitable spatial impression of the sound source, to provide a good mix, and to produce good envelopment at every seat. In addition, virtual acoustics support the musicians on stage and are also perfect for generating sound effects thanks to high-quality audio processing, a mixing matrix and a more or less complex loudspeaker installation. Gunter Engel described the possibilities and limitations of virtual acoustics using a large number of example installations ranging from small venues such as the St. Moritz Art Masters to current fixed installations such as the Felsenreitschule theatre in Salzburg or even outdoor applications such as Klassik am Odeonsplatz (Classic in Odeon Square) in Munich. Particularly noteworthy was the system implementation in a sports arena for the production of Olivier Messiaen’s opera “St. Francois d’Assise” in Spain. During the subsequent workshop, Gunter Engel demonstrated to small groups of participants what the virtual room-enhancement system sounds like and the opportunities it offers. In addition to the virtual room enhancement that had already been covered in the paper, Engel now concentrated on the effects capabilities of the system. With the help of a saxophonist, he demonstrated the difference between unprocessed and enhanced room acoustics and the result of different extreme settings. Beyond that, the system also offers the possibility of moving sound sources across room walls and ceilings, freely or using programming, and of integrating them into the room acoustics at the same time. This feature can be used both for adding sound effects and for accurate directional sound reinforcement.
Source positioning can be done in three ways: using a computer mouse and a three-dimensional model of the hall; by using a direction pointer in the actual room, first demonstrated in Banz; or, also demonstrated in Banz for the first time, fully automatically using the Zactrack tracking system. To be tracked, the performer is equipped with a small transmitter, with whose help the Zactrack system determines the performer’s current position. Using the OSC protocol, the positional information is then forwarded to the VIVACE system, which computes and implements the audio processing required. An advantage of this type of motion tracking is that both systems are open. Thus, the positioning information generated by the Zactrack system can also be used for other purposes, for example for automatic spotlight tracking.
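OSC itself is a lightweight, well-documented encoding typically carried over UDP, which is what makes this kind of cross-vendor hand-off straightforward. The sketch below hand-encodes an OSC 1.0 message using the standard library alone; the address pattern /vivace/source/1/xyz is a made-up example for illustration, not a documented VIVACE or Zactrack address.

```python
import struct

def osc_message(address, *floats):
    """Encode an OSC 1.0 message whose arguments are all 32-bit floats."""
    def pad(raw):
        # OSC strings are NUL-terminated and padded to a 4-byte boundary.
        return raw + b"\x00" * (4 - len(raw) % 4)
    type_tags = "," + "f" * len(floats)
    packet = pad(address.encode("ascii")) + pad(type_tags.encode("ascii"))
    for value in floats:
        packet += struct.pack(">f", value)  # floats are big-endian in OSC
    return packet

# Hypothetical address pattern carrying an x/y/z position in metres;
# the packet could then be sent with socket.sendto(packet, (host, port)).
packet = osc_message("/vivace/source/1/xyz", 2.5, 0.0, 1.7)
```

Because the wire format is this simple and openly specified, any third system – a lighting desk, for instance – can consume the same position stream.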
In order to offer this workshop, Peter Maier from Concept A had modified the acoustics of the room considerably. The reverb time had been significantly reduced, allowing for a clearer (though less typical) demonstration of the VIVACE system. A total of 24 KS-Audio CL208 speakers plus eight KS-Digital C8-Coax speakers were installed on the walls and ceiling on a tailor-made truss system to reproduce the VIVACE signals.
Kai Harada, freelance Broadway sound designer, and Michel Weber, sound coordinator at Stage Entertainment, trod new paths not only with the content of their paper but also with the way in which it was presented. Kai Harada spoke live over a video link from the USA, supported by Michel Weber in Banz. They talked about the development of an approach to modern musical sound design – an area which has been professionalised rapidly in recent years. Kai Harada described the various steps from tender to the first show with a focus on the differences between European and US productions. For example, in the USA the audio system is typically set up and tested in a separate hall before installation in the actual musical theatre while in Europe, the audio system goes directly into the target theatre for on-site tweaking. Other interesting aspects of the paper, accompanied by many illustrations, included the design and calibration of the audio system, possibilities for hiding loudspeakers in the scenery and microphones on performers, and using scene automation for audio. Following Kai Harada’s presentation, Michel Weber talked about the production details of Hinterm Horizont (Beyond the Horizon), a musical show he had produced in cooperation with Kai Harada in Berlin. Michel Weber described the many practical constraints affecting, for example, the equipment selection for a particular show when existing systems are to be used. However, Hinterm Horizont was equipped with first-class technology, and excellent audio quality was ensured by the use of AURUS and NEXUS systems. Since earlier this year, AURUS and NEXUS have also been found in the premiere location for musicals: on Broadway in New York, supporting the latest show, Follies, with sound designed by Kai Harada.
Following on from this, Christian Fuchs, theatre applications manager at the SALZBRENNER STAGETEC MEDIAGROUP, led a workshop about the scene-automation features of the AURUS, which are used mainly in theatre and musical productions. Christian Fuchs’ demonstration followed the chronology of setting up a new production. First, he described the general procedure for creating a scene list. The easiest way of doing this is to take the simplest of full snapshots with many parameters excluded initially from automation using the Isolate function. The snapshots are refined gradually, so that, depending on the specific application, full-function, channel-function, or function snapshots are created. A global or selective copy/paste function is used to change parameters stored in a snapshot. Changes made in this way can be absolute or relative. In any production, there are always a number of scene events which need to be triggered separately. This can be done using MIDI, GPI or machine control messages initiated by a musician, the conductor or the stage manager directly. Even NEXUS routing can be changed by the scene list. Finally, the scene list and the logic functions of the NEXUS can be combined so that control commands can be different depending on what is happening on the stage at a specific moment.
The restoration and reconstruction of the historical Bolshoi Theatre in Moscow is recognised internationally as a flagship project of our time. Vyacheslav Efimov, deputy general director of the State Academic Bolshoi Theatre of Russia, talked about the history and current renovation of the house to a fascinated audience.
The reconstruction was undertaken not only to modernise but also for paramount safety reasons, because the old building was threatening to subside due to its weak foundations. With great attention to detail, much emphasis was placed on restoring the original state of the building, including the original décor, to conserve the theatre not only as a playhouse but also as a cultural monument. At the same time, the building was enlarged with many additional rooms.
Another major aspect of the paper was the architectural acoustics designed by Müller-BBM. The objective was to conserve the historical appearance of the hall while bringing it into line with the acoustics standards of today.
The impressive scale of the project was illustrated with many current photographs of the Bolshoi Theatre. The audio and video technology was designed from 2007 on by the SALZBRENNER STAGETEC MEDIAGROUP. The actual installation works began in 2009 and were performed by the MEDIAGROUP in cooperation with the Russian company Modul T. The restored Bolshoi Theatre reopened officially on 28th October after several years of renovation.
The subject was taken up again later by Dominik Haag, software-development coordinator at the SALZBRENNER STAGETEC MEDIAGROUP. As a project manager, he is involved in planning the new systems at the Bolshoi Theatre where he is focused on designing a new stage-management system made up of DELEC ORATIS and MediaControl components. The functionality required by the Bolshoi Theatre goes far beyond that of a standard stage-management console. The networked system components are installed throughout the house and provide intercom functions, cue-light control, number displays, subscriber units, monitor-program selection, player control, and serve as a distributed audio matrix.
Both the combined systems support networking and have a distributed structure. While the DELEC components provide all audio-related functions including audio networking, audio processing, audio interfaces and subscriber units, MediaControl fulfils configuration and control functions. The Bolshoi Theatre MediaController has a total of 17 units connected over a fibre-optic network. These can be operated via a variety of keypads, touchscreens, or keys on ORATIS subscriber units. Particular attention was paid to handling failures. Each unit can operate locally as an island in the event of, for example, a network failure. In addition, the system includes SNMP-enabled fault monitoring and logs all SNMP traps to an error-management database. This reduces administration significantly for this very large system.
The paper by Dr Helmut Wittek, director of engineering at SCHOEPS Mikrofone GmbH, refreshed the delegates’ basic knowledge about microphones. He gave a detailed presentation of the SuperCMIT digital shotgun microphone which, due to innovative beamforming, exhibits enhanced directivity and diffuse-sound suppression of up to 15 dB. Suggested applications include not only classical shotgun-microphone uses such as film sound or sports broadcasts but also stage productions. The microphone either connects directly to a digital console via an AES-42 interface (mode 1) or can use a special Schoeps adaptor with phantom-power and D/A conversion. During the breaks and after the end of the demonstration, participants of the Banz seminar had the opportunity to try out the Schoeps SuperCMIT in the reverberant vaulted corridors of the monastery.
The theme of true-directivity sound reinforcement is by no means new; after all, the patent for delta stereophony was issued more than 30 years ago. However, in recent years it has received new impetus with the introduction of automatic motion-tracking systems. Two such systems, TiMax and Stage Tracker, were compared at the Schwerin Castle Festival, and the results were presented in a paper at the Banz Monastery event by freelance audio engineer Martin Wurmnest, who was working in Schwerin for Neumann & Müller Veranstaltungstechnik, and John Schröder, head of audio at the Mecklenburg State Theatre in Schwerin. A particularly interesting aspect was that the authors discussed the possibilities from the sound engineer’s perspective. Setting up a motion-tracking system requires extra effort initially. The operator needs to subdivide the stage area into zones in the control software. Depending on the system, this can be cumbersome. Additional installation effort is mainly related to installing the antennas required to determine the physical positions of the individual actors. The more antennas used, the better unusually shaped stage areas can be subdivided into zones; on the other hand, this may complicate the initial calibration. In conclusion, Wurmnest and Schröder noted that they would not want to work without automatic motion tracking for comparable productions in future. The benefits clearly outweigh the additional effort. One of the most renowned seminar delegates, Gerhard Steinke, co-author of the delta-stereophony patent, was full of praise for the excellent job Wurmnest and Schröder had done in the service of theatre sound reinforcement as well as for their great paper.