
Introducing an Automultiscopic Display

A team of researchers at the USC Institute for Creative Technologies (Playa Vista, CA) has developed a system that captures video in a unique way and then presents full-sized images of people on a so-called ‘automultiscopic’ display. The term automultiscopic describes a display that allows multiple users to view 3D content simultaneously, without the need for glasses.

A recent publication by the team is entitled ‘Creating a life-sized automultiscopic Morgan Spurlock for CNN’s “Inside Man”.’ A copy of this brief article is available online.

The production of an automultiscopic image begins with capturing video of the subject. With the subject uniformly bathed in intensely bright light, the capture is performed by 30 Panasonic X900MK 60p consumer cameras spaced over a 180° arc.
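The article gives only the camera count and the overall span; assuming the cameras are evenly spaced (an assumption on my part), the geometry works out to one camera roughly every 6°. A quick sketch in Python:

```python
import numpy as np

# 30 cameras, assumed evenly spaced over a 180-degree arc around the subject.
angles = np.linspace(0.0, 180.0, 30)

print(round(angles[1] - angles[0], 2))  # ~6.21 degrees between neighboring cameras
```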

Although 30 is a substantial number of cameras, it would nonetheless take vastly more to capture light rays from every possible angular direction, which is a practical impossibility. To address this challenge, the team adopted a software-based approach that produces images of comparable quality through the use of a new view interpolation algorithm.

The execution of the algorithm is distributed among multiple computer systems. More specifically, six computers are used to render the projector images. Each computer contains two ATI Eyefinity 7800 graphics cards providing 12 video outputs in total. Each video signal is then divided three ways using a Matrox TripleHead2Go HDMI splitter.
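Those numbers line up with the projector count given later in the article; a quick back-of-the-envelope check:

```python
computers = 6
outputs_per_computer = 12  # two graphics cards per machine, 12 outputs in total
splits_per_output = 3      # each output divided three ways by the splitter

feeds = computers * outputs_per_computer * splits_per_output
print(feeds)  # 216 -- one independent video feed per projector
```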

The interpolation algorithm builds on previously reported work that uses optical flow to resample a sparse light field. The first step in this process is to synchronize the videos from the cameras to within 1/120th of a second by aligning their sound waveforms. The spatial flow correspondences between the videos are then computed pair-wise using GPU-accelerated optical flow. Since each camera pair is processed independently, the computations can be executed in parallel. As a result, the data can be processed in far less time than conventional multi-camera stereo reconstruction would require.

The view interpolation algorithm maps images directly from the original video sequences to all of the projectors, and does so in real time. The process is reported to scale easily, so it can be expanded to as many additional cameras or projectors as desired. The net result of the processing is that each projector can be supplied with a slightly different view of the subject.
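The article does not include ICT’s code, and the sketch below is only a stand-in: it synthesizes an intermediate viewpoint between two neighboring camera images by warping one partway toward the other along a dense optical flow field, with OpenCV’s Farneback flow substituting for the team’s GPU optical flow. The real system performs this mapping bidirectionally, in real time, for all of the projector feeds.

```python
import cv2
import numpy as np

def interpolate_view(img0, img1, alpha):
    """Synthesize a view between two neighboring camera images.
    alpha in [0, 1]: 0 reproduces img0's viewpoint, 1 approaches img1's."""
    g0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    # Dense flow field mapping pixels of img0 to their positions in img1.
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g0.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Backward-warp img0 partway along the flow; treating the flow as
    # locally constant is a crude but standard approximation.
    map_x = gx - alpha * flow[..., 0]
    map_y = gy - alpha * flow[..., 1]
    return cv2.remap(img0, map_x, map_y, cv2.INTER_LINEAR)

# Example: a virtual camera halfway between two adjacent real cameras.
# mid = interpolate_view(frame_cam12, frame_cam13, 0.5)
```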

The processed 3D images are projected by an array of 216 LED-powered Qumi v3 projectors, each in portrait orientation. The projectors are arranged in a semicircular array with a 3.4 m radius, oriented horizontally and located behind a flat screen. The ‘dense’ 0.625° spacing between projectors is reported to provide a large display depth of field with minimal aliasing.
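Taking the stated figures at face value, that spacing implies the 216 projectors cover roughly a 135° arc, with neighboring lenses only a few centimeters apart; a quick check:

```python
import math

n_proj = 216
spacing_deg = 0.625
radius_m = 3.4

span_deg = (n_proj - 1) * spacing_deg          # angular span of the array
gap_m = radius_m * math.radians(spacing_deg)   # arc distance between neighbors

print(round(span_deg, 1), round(gap_m, 3))     # ~134.4 degrees, ~0.037 m
```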

The screen, manufactured by Luminit, is composed of a vertically-anisotropic light-shaping diffuser material. It is designed to scatter the light while preserving the unique angular distribution of rays produced by each projector. More specifically, the screen scatters light into a 60° vertical stripe, so that each pixel can be seen from multiple viewing heights, while maintaining a narrow 1° horizontal blur. In this way, the gaps between the projectors are smoothly filled in by adjacent pixels. Each point on the screen can thus display different colors in different horizontal directions, corresponding to the different projectors behind the screen.

When viewing the final image, the viewer sees a 3D image that changes as they move about in front of the screen. The pixels in each such image are produced by only a subset of the projectors behind the screen. The perspective observed from each position matches the view that would be seen if the real subject were actually standing in the corresponding position in front of the viewer.
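As a toy model (my own simplification, not the team’s math), one can treat the screen’s 1° horizontal blur as a Gaussian over projector angles: for any given viewing direction, only the handful of projectors within about a degree of that direction contribute to a screen point, which is why each viewpoint sees its own smoothly blended subset:

```python
import numpy as np

# Layout figures from the article; the Gaussian blur model is an assumption.
N_PROJ, SPACING_DEG, BLUR_DEG = 216, 0.625, 1.0

# Projector angular positions, centered on the screen normal.
proj_angles = (np.arange(N_PROJ) - (N_PROJ - 1) / 2) * SPACING_DEG

def projector_weights(view_angle_deg):
    """Relative contribution of each projector to the light leaving one
    screen point toward a viewer at the given horizontal angle."""
    d = proj_angles - view_angle_deg
    w = np.exp(-0.5 * (d / BLUR_DEG) ** 2)
    return w / w.sum()

w = projector_weights(10.0)            # a viewer 10 degrees off-axis
print(np.flatnonzero(w > 0.01).size)   # only a few projectors contribute
```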

A somewhat informative video on the automultiscopic display can be found at the end of this article.

A display having such properties would seem ideal for presenting human subjects, as it can be designed to allow for natural personal interactions with 3D cues such as eye gaze and complex hand gestures. It follows that an automultiscopic display has a wide range of potential applications, extending from video games to medical visualization. The goal of the Institute for Creative Technologies is, however, to develop exhibits in which users can have a simulated interactive conversation with a realistic 3D image of another person.

– Arthur Berman

USC Institute for Creative Technologies, Andrew Jones, 310-574-5700, [email protected]