Immersive Audio Debuts at I/ITSEC 2016

Two companies showed immersive audio demos at I/ITSEC 2016 – which I think is the first time such technology has been shown at this event. Barco and JVCKenwood offered two different approaches to the market.

Barco created a space about 15’ x 15’ (4.5m x 4.5m) and arranged 20 speakers around the room. The demo is based on the Iosono Core technology that Barco gained in a recent acquisition. Iosono is an object-based audio format, but it can be rendered locally based upon the number, type and position of the speakers in the room. Barco said it can easily render to 5.1, 7.1 or stereo headphones as well.
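
For readers less familiar with object-based audio, the general idea can be sketched in a few lines of Python. This is not Barco's or Iosono's actual renderer (which is proprietary); the distance-based panning, function name and rolloff parameter below are my own illustration of how a mix of positioned sound objects can be mapped onto whatever speaker layout happens to be in the room.

```python
# Minimal sketch of object-based audio rendering to an arbitrary speaker
# layout. NOT the Iosono algorithm -- just a generic distance-based
# amplitude-panning illustration of the general idea: the content stores
# objects (sound + position), and per-speaker gains are computed at
# playback time from the actual room layout.
import numpy as np

def render_object(samples, obj_pos, speaker_positions, rolloff=2.0):
    """Spread a mono object signal across speakers by proximity.

    samples           -- 1-D numpy array, the object's mono audio
    obj_pos           -- (x, y) position of the virtual sound source
    speaker_positions -- list of (x, y) positions of the room's speakers
    rolloff           -- how sharply gain falls off with distance (assumed)
    """
    spk = np.asarray(speaker_positions, dtype=float)
    dist = np.linalg.norm(spk - np.asarray(obj_pos, dtype=float), axis=1)
    gains = 1.0 / (1.0 + dist) ** rolloff        # closer speakers get more level
    gains /= np.sqrt(np.sum(gains ** 2))         # constant-power normalization
    # One output channel per speaker; the layout could be 20 speakers,
    # a 5.1 bed or a stereo pair.
    return np.outer(gains, samples)

# Example: a helicopter object toward the back-left of a 15' x 15' room
room_speakers = [(0, 0), (4.5, 0), (0, 4.5), (4.5, 4.5), (2.25, 4.5)]
heli = np.random.randn(48000)                    # stand-in for one second of audio
channels = render_object(heli, obj_pos=(0.5, 4.0), speaker_positions=room_speakers)
print(channels.shape)                            # (5, 48000): one row per speaker
```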

Barco performed a demo with helicopter and other sound sources coming from multiple places in the room to very good effect. The downside may be the requirement to author the audio in the Iosono format and the need to have a processor on site to decode and re-render the audio. The advantage is a more immersive environment – which is the ultimate goal of training, after all.

JVCKenwood showed a headphone-based immersive solution. In their demo, each user must first be calibrated. This is done by having the user place a small microphone at each ear to receive a series of calibration sounds from five speakers surrounding the user. The measurements capture the sound field as heard by each person’s ears. This profile apparently differs quite a bit from person to person and is used to adjust the sound delivered to the headphones for a calibrated and personalized immersive experience.
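
Conceptually, this is personalized binaural rendering: the in-ear measurements yield each listener's left- and right-ear responses for every speaker position, and those responses are applied to the source channels before they reach the headphones. JVCKenwood has not published its processing chain, so the sketch below is only a generic illustration of that idea, with invented measurement data and function names of my own.

```python
# Rough sketch of the general technique (personalized binaural rendering),
# not JVCKenwood's actual processing. The idea: in-ear microphone
# measurements give per-person, per-speaker impulse responses for the left
# and right ear; each source channel is convolved with that person's
# measured pair and summed into a two-channel headphone feed.
import numpy as np
from scipy.signal import fftconvolve

def binauralize(channels, left_irs, right_irs):
    """Fold multichannel audio down to personalized headphone stereo.

    channels  -- dict mapping channel name -> mono numpy array (e.g. 5.1 stems)
    left_irs  -- dict mapping channel name -> measured left-ear impulse response
    right_irs -- dict mapping channel name -> measured right-ear impulse response
    """
    left = sum(fftconvolve(sig, left_irs[name]) for name, sig in channels.items())
    right = sum(fftconvolve(sig, right_irs[name]) for name, sig in channels.items())
    return np.stack([left, right])               # shape: (2, samples + ir_len - 1)

# Hypothetical usage with made-up measurement data for one listener:
names = ["L", "R", "C", "Ls", "Rs"]
stems = {n: np.random.randn(48000) for n in names}   # 5.1 bed (LFE omitted)
l_irs = {n: np.random.randn(256) for n in names}     # this person's left-ear IRs
r_irs = {n: np.random.randn(256) for n in names}     # this person's right-ear IRs
headphone_feed = binauralize(stems, l_irs, r_irs)
print(headphone_feed.shape)                          # (2, 48255)
```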

As with the Barco demo, I was then shown a video featuring a variety of moving sounds that definitely gave a sense of immersion and localization of the sound sources.

The advantage of the JVCKenwood approach is the ability to create personalized immersive audio from a 5.1 soundtrack. The disadvantages are the need for personal calibration and the headphones-only playback. – CC