
Other Cinema/Large Venue Considerations Update from Display Summit 2018

The afternoon session of October 3rd was dedicated to a number of other technologies and considerations around next generation cinema. Speaker Joe Kane, from Joe Kane Productions, focused on the creation of a single master, not just a single DCP as proposed by EclairColor.

Kane thinks a peak luminance level of 300 cd/m² is fine for the cinema and even in the home, since most content is graded below 100 cd/m². But you need a delivery format that can readily adapt to the particular characteristics of any display it encounters if you are truly going to get to a single-master solution. That is, the projector or display would configure itself for optimal performance, as is done now in the cinema market. But Kane’s idea is to not only let the display adapt, but to let the source content adapt as well.

It seems Kane envisions storing content in an ACES-like format as 16-bit float RGB and processing in 32-bit full float. An output display transform (ODT) then reworks the content to what is needed by the display, and this can also take into consideration the ambient light and adjust the color to maintain the same perception the creator intended.
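
As a rough illustration of what that adaptation might look like in practice (a minimal sketch, not Kane's actual math or the real ACES transforms), the snippet below stores a master as 16-bit half-float RGB, processes it in 32-bit float, and renders it through an output transform parameterized by the target display's peak luminance and a simple surround adjustment:

```python
import numpy as np

def output_transform(master_rgb_half, peak_nits=300.0, surround_gamma=1.0):
    """Toy display-adaptive output transform (illustrative only).

    master_rgb_half : float16 array of scene-linear RGB (1.0 ~ diffuse white)
    peak_nits       : peak luminance of the target display in cd/m^2
    surround_gamma  : >1.0 raises contrast to compensate for a dark surround
    """
    rgb = master_rgb_half.astype(np.float32)       # process in 32-bit float

    nits = 100.0 * rgb                             # place diffuse white at 100 nits
    nits = peak_nits * nits / (nits + peak_nits)   # roll highlights off toward the display peak
    nits = peak_nits * (nits / peak_nits) ** surround_gamma  # crude surround compensation

    return nits                                    # display-referred output per channel

# The same half-float master rendered for two very different displays:
master = np.random.rand(4, 4, 3).astype(np.float16)                     # stand-in for real content
cinema = output_transform(master, peak_nits=48.0, surround_gamma=1.2)   # dark theater
living_room = output_transform(master, peak_nits=300.0, surround_gamma=1.0)
```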

This concept requires a handshake between the display and source content for optimization and requires the transfer of metadata. This may be okay for an OTT service but does not seem like a very good solution for a broadcast distribution model.

Taking a slightly different tack was Bill Feightner from ColorFront. Like Joe Kane and EclairColor, ColorFront would like to see content mastered in the highest HDR quality to serve as the master. Then, using its conversion engine, any kind of deliverable can be created for distribution to cinemas, TVs, mobile devices – anything. The key is to try to maintain the same perceptual image quality and creative look across this wide variety of playback devices.

Feightner used the graphic below to explain the process. At the front end, content from cameras, VFX, pre-recorded sources and more in a variety of formats is ingested via an input transform into the ColorFront Virtual Color Grading space (he was not clear exactly what this is). This is where the content is graded. Certain looks can be used by the colorist if desired or created from scratch. An additional module is called the perceptual transform. This takes into account the intended brightness of the target display, primary colors and surround illumination to transform the colors and luminance of the master to one that preserves the artistic intent from a perceptual point of view. The final process is to then encode this via an output transform for a specific deliverable format, as shown in the graphic.

ColorFront 1

Feightner says that the workflow can function in reverse as well, with deliverable content going through an inverse output transform to get it back to the linear virtual grading space.
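
Structurally, the workflow he described behaves like a chain of transforms around a common linear grading space. The sketch below is purely illustrative (ColorFront's engine and transforms are proprietary; these function names are placeholders), but it shows the forward path and the inverse output transform used for the reverse direction:

```python
import numpy as np

# Toy stand-ins for the stages described above; illustrative placeholders only.

def input_transform(camera_rgb):
    """Bring source footage into a common scene-linear grading space."""
    return camera_rgb.astype(np.float32)

def perceptual_transform(linear_rgb, peak_nits=100.0, surround_gamma=1.0):
    """Re-map the graded master for the target display's brightness and surround."""
    tone_mapped = (peak_nits / 100.0) * linear_rgb / (linear_rgb + 1.0)
    return tone_mapped ** surround_gamma

def output_transform(display_linear, encode_gamma=1 / 2.4):
    """Encode for a specific deliverable (here just a simple gamma encode)."""
    return np.clip(display_linear, 0.0, 1.0) ** encode_gamma

def inverse_output_transform(encoded, encode_gamma=1 / 2.4):
    """Take a deliverable back toward the linear grading space (the reverse path)."""
    return encoded ** (1.0 / encode_gamma)
```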

The graphic below shows how this technique is already in use in television to create both an SDR and HDR deliverable.

ColorFront 2

Harman International was represented by Geoffrey Christopherson, who stood in for Daniel Saenz. This presentation was all about how Harman, working with Samsung’s Audio Lab, has been developing the display solutions for the Onyx cinema screen. There has been an extraordinary path of development with a tremendous amount of progress in a relatively short time frame.

The main issue that the team was trying to solve is delivering exceptional audio quality throughout the theater without the ability to project sound through the LED screen the way it is done with projection systems. The whole idea is for voices to sound like they are coming from the direction of the lips on the screen, not from above the lips.

It turns out that human hearing is very good at localizing sounds horizontally by comparing the frequency, timing and phase differences of sound waves arriving at each ear. We use different cues to localize sound vertically, such as the forehead and upper ear lobe for frequencies above 1 kHz and the chest cavity for sounds around 900 Hz. That means any “de-elevating” approach must focus on these frequencies. The main solution is a reflector horn placed on the wall a few rows back that projects sound at different frequencies and at different angles to bounce off the LED screen. Additional reflector horns were then added to improve frequency response. The audio is adjusted and time-delayed with DSP processors so as not to create any echoes and so that the sound appears to come from behind the screen.
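
The time alignment part of that is simple geometry: the reflected path from the horn to the screen and back out to the listener is longer than the direct path from conventional speakers, so the DSP delays the direct feeds to match. A rough sketch, with made-up distances:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def alignment_delay_ms(horn_to_screen_m, screen_to_listener_m, direct_path_m):
    """Delay (ms) to apply to the direct-radiating speakers so they do not
    arrive ahead of the screen-reflected sound and create an audible echo."""
    reflected_path = horn_to_screen_m + screen_to_listener_m
    return max(0.0, (reflected_path - direct_path_m) / SPEED_OF_SOUND * 1000.0)

# Example: horn 8 m from the screen, listener 10 m from the screen,
# direct speaker path of 11 m -> delay the direct feed by roughly 20 ms.
print(round(alignment_delay_ms(8.0, 10.0, 11.0), 1))
```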

Harman 1

Christopherson then went on to explain that they developed a set of Finite Impulse Response (FIR) filters to do the system equalization (EQ) without phase issues. That means the sound should be uniform throughout the theater.
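
A linear-phase FIR EQ of the kind he described can be sketched with standard DSP tools (the response numbers below are invented, not Harman's tuning). A symmetric FIR filter has a constant group delay, so it corrects the magnitude response without adding phase distortion:

```python
from scipy.signal import firwin2

fs = 48_000                                      # sample rate, Hz
freqs = [0, 200, 1_000, 4_000, 12_000, fs / 2]   # measured response points (Hz)
room_db = [0, 3, -2, 1, -4, -4]                  # hypothetical deviation at those points (dB)
eq_gain = [10 ** (-g / 20) for g in room_db]     # invert the deviation -> EQ target gains

# An odd-length symmetric FIR designed to hit the target magnitude response.
taps = firwin2(numtaps=2047, freq=freqs, gain=eq_gain, fs=fs)

# Linear phase means a constant group delay of (N - 1) / 2 samples at every frequency.
print(f"group delay: {(len(taps) - 1) / 2 / fs * 1000:.1f} ms")
```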

Standard JBL sculpted surround speakers complete the audio installation, along with all the audio processing and amplification systems.

Switching gears, Jeremy Hockman from Megapixel Visual Reality came to talk about some of the challenges and solutions he has encountered in creating very large multi-megapixel installations. He cited a casino installation they recently built that totaled over 100 megapixels, with miles-long cable runs and non-standard-raster screens with odd aspect ratios. He had to manage over 11 TB of data per second in an uncompressed format. That’s a big challenge that not everyone is thinking about.

He found that existing video protocols are just not up to this task, and he relied on 12G-SDI connections to get the job done, as the handshaking between HDMI and even DisplayPort is not robust enough for reliability over all these displays. With so many pixels, a single processor can’t handle the outputs. That really complicates the installation when you have to manage lots of sources of content, lots of processors and routing to lots of displays.
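
To put the link budgeting in perspective, here is a back-of-envelope calculation (all numbers are illustrative; real payloads depend on bit depth, chroma format, blanking and the SDI mapping used) of how many 12G-SDI links an uncompressed canvas of a given size needs:

```python
import math

def sdi_12g_links_needed(width_px, height_px, fps=60, bits_per_pixel=30,
                         link_payload_gbps=11.88):
    """Rough count of 12G-SDI links for an uncompressed canvas."""
    payload_gbps = width_px * height_px * fps * bits_per_pixel / 1e9
    return math.ceil(payload_gbps / link_payload_gbps), payload_gbps

# Hypothetical ~100 megapixel canvas at 60 fps, 10 bits per channel.
links, gbps = sdi_12g_links_needed(20_000, 5_000, fps=60, bits_per_pixel=30)
print(f"{gbps:.0f} Gb/s of pixel data -> {links} x 12G-SDI links")
```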

Another issue is mapping/rasterization. Suppose you want to create a super-wide banner display. Traditional approaches require you to place the segments in conventional 4:3 or 16:9 rasters, which is inefficient and a waste of bandwidth. We need a better solution, Hockman believes.
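
The inefficiency is easy to quantify. Assuming a hypothetical banner and standard 1920x1080 rasters (both sets of dimensions invented for illustration):

```python
import math

banner_w, banner_h = 15_360, 480        # hypothetical super-wide banner, in pixels
raster_w, raster_h = 1_920, 1_080       # conventional 16:9 raster

rasters = math.ceil(banner_w / raster_w) * math.ceil(banner_h / raster_h)
carried = rasters * raster_w * raster_h      # pixels actually transported
used = banner_w * banner_h                   # pixels the banner really needs
print(f"{rasters} rasters, {used / carried:.0%} of the transported pixels are used")
```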

megapixelVR

He posed several other questions.

  • How do you make screen adjustments when the display is a mile away and you can’t see the screen?
  • How do you ensure consistent colors across all these screens when most LED panels are not calibrated, when there are viewing angle dependencies, and when D65 does not equal D65?
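
The last point, that “D65 does not equal D65”, can be quantified by comparing the measured white points of two panels that both claim a D65 target; the chromaticities below are made up:

```python
def xy_to_uv(x, y):
    """CIE 1931 xy chromaticity -> CIE 1976 u'v' coordinates."""
    d = -2 * x + 12 * y + 3
    return 4 * x / d, 9 * y / d

def delta_uv(xy_a, xy_b):
    """Euclidean distance between two white points in u'v'."""
    ua, va = xy_to_uv(*xy_a)
    ub, vb = xy_to_uv(*xy_b)
    return ((ua - ub) ** 2 + (va - vb) ** 2) ** 0.5

nominal_d65 = (0.3127, 0.3290)
panel_white = (0.3160, 0.3240)   # hypothetical measured white of one tile
# Tiles sitting side by side a few thousandths apart in u'v' are typically visible.
print(f"du'v' = {delta_uv(nominal_d65, panel_white):.4f}")
```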

To address some of his concerns, he suggested that display makers should include native fiber-optic support, and he called for standardization of methods to control uniformity in LED screens. For color management, stick to standard color spaces like Rec. 709, sRGB or P3, and ask the display providers for their off-axis uniformity data. Also, demo competing display solutions in the physical context – i.e. in the lighting environment and viewing positions where they will be used.

Steve Paolini from Telelumen provided a novel talk and a really good demo related to an expanded color gamut. Telelumen is a light source maker that can supply LED modules to create any spectral power distribution the client wants. They have 8-color and 16-color standard illumination sources that they sell into the healthcare, retail, workplace, lighting, horticulture, movie and TV markets.
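
The basic math behind synthesizing an arbitrary spectrum from a handful of LED channels is a constrained least-squares fit. The sketch below (Gaussian stand-in spectra and invented channel peaks, not Telelumen's actual data or algorithm) finds non-negative drive levels that approximate a target spectral power distribution:

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(400, 701, 5)                 # 400-700 nm grid

def led_spectrum(peak_nm, width_nm=25.0):
    """Gaussian stand-in for one LED channel's emission spectrum."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

# Eight hypothetical channels spread across the visible range.
peaks = [420, 450, 480, 510, 545, 580, 620, 660]
channels = np.stack([led_spectrum(p) for p in peaks], axis=1)   # shape (wavelengths, 8)

# Toy broad "daylight-ish" target spectrum.
target = np.exp(-0.5 * ((wavelengths - 560) / 80.0) ** 2)

# Non-negative least squares: LEDs cannot emit negative light.
drive_levels, residual = nnls(channels, target)
print(np.round(drive_levels, 3), f"residual={residual:.3f}")
```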

RGB is the standard for content capture and display, but additional primaries would allow for an increased color gamut and more flexibility in luminous efficacy. But to move beyond RGB, cameras, illumination sources, processing software and displays would all have to adopt the new schema to be effective.

However, there is a good reason to consider more primaries. For example, most reds are a bit orangey, and Caucasian and Asian skin tones have a lot of red, so they can often look ‘a bit off’.

Paolini then started up a demo in the conference room composed of the eight different LED light sources shown in the chart below. He said most cameras will not be able to differentiate between the last two red and blue colors, or see the cyan color very well either. He said if it were up to him, he would first add the deeper red to complement the standard red primary, to help with skin tones. Next would be the cyan color, to really help show Mediterranean or Caribbean seas. There were many questions and great interest in this demo.

Telelumen 1

Mike Caputo from Radiant Vision Systems then described some of the metrology challenges with large format LED screens – and offered their solutions. He started by noting the trend in displays is toward emissive displays with more and smaller sized pixels. So how does that impact uniformity, he asked? Pretty badly, he answered.

For one, LED screen makers start by binning the LEDs but there is always a tolerance even on tight binning specs. These slight variations are especially visible when displaying a white or gray screen. This can show up as a mottled sort of look. As Lude pointed out, there can also be tile-to-tile and cabinet-to-cabinet variations that are quite visible. Calibration using their instrument can fix this.

Caputo recommends using their 29-megapixel I29 imaging colorimeter (which has filters to replicate human visual sensitivity). This allows the entire screen, or smaller sections of it, to be imaged so that luminance and chromaticity can be measured on a pixel-by-pixel basis. The instrument is used during production, after module assembly or after final installation.

One key patented innovation is their test pattern, which consists of a dot-matrix pattern that is shifted over time. This data is used to understand the luminance and chromaticity variations on a pixel basis and to develop a correction matrix, which can be fed back to the LED display controller.
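
A simplified version of that feedback loop, here for luminance only (real calibration also corrects chromaticity, and this is not Radiant's actual method), looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for measured per-emitter luminance of one LED module (in nits).
measured_nits = rng.normal(loc=600.0, scale=25.0, size=(216, 384))

# Target the dimmest pixel (LED output can only be attenuated without eating
# into headroom), then derive per-pixel gains to send back to the controller.
target = measured_nits.min()
correction_gain = target / measured_nits          # values in (0, 1]

corrected = measured_nits * correction_gain
print(f"std dev before: {measured_nits.std():.1f} nits, after: {corrected.std():.6f} nits")
```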

This instrument can also be used on OLED screens that have a variety of sub-pixel patterns. Typical results are shown below. Measurements and calibration can be done in minutes. – CC

Radiant 1