
HUDs Are Easy; Augmented Reality HUDs Are Not

The 2017 Vehicle Displays and Interfaces Symposium, held in Livonia, Michigan, on September 26 and 27, continued the growth, stimulating content, and enthusiasm demonstrated last year. Attendance this year was 550, compared to 400 last year and 300 the year before. Exhibit tables were up to 73 this year from 61 in 2016.

A major topic of the first day was head-up displays (HUDs) and augmented reality (AR, or "AugRel"). While simple HUDs that display basic instrument-cluster-type information and turn-by-turn way-finding symbols are well known, have been deployed as original equipment in several vehicles, and are even available as after-market accessories, an AR HUD is not available in any commercially available vehicle. If HUDs are so easy, what makes AR HUDs so difficult?

In his keynote address, Thomas Seder, HMI (Human Machine Interface) Chief Technologist at GM Research and Development, presented an augmented-reality functional block diagram that broke the AR HUD flow into three distinct processes. The first is Sense, with the sensing being done by vehicle sensors such as RADAR, LIDAR, GPS, DGPS, visible cameras, NVIS cameras, and a head-eye tracker.

The second process is to register the sensed information with the artificially generated images, using algorithms running on a dedicated AR processor with a GPU. The processor aggregates sensor data, performs sensor fusion, and transforms the perspective of the sensors to that of the driver, thus enabling the AugRel images to be overlaid on, and registered with, the real world. AugRel applications include highlighting lane markings and targeting vehicles and pedestrians, and the system must include machine-learning algorithms.
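To make that perspective-transformation step a bit more concrete, here is a minimal sketch of the kind of geometry involved, assuming a sensed object position in vehicle coordinates and an eye position supplied by the head-eye tracker. The coordinate conventions, numbers, and virtual-image distance are illustrative assumptions, not details from Seder's talk.

```python
import numpy as np

def register_to_driver_view(obj_vehicle, eye_vehicle, virtual_image_dist=7.5):
    """Illustrative registration step: express a sensed object in the
    driver's eye frame and place an overlay symbol on the HUD's virtual
    image plane so that it covers the real object.

    obj_vehicle, eye_vehicle: 3-vectors in the vehicle frame (x forward,
    y left, z up), in metres.  virtual_image_dist: distance of the HUD's
    virtual image ahead of the eye, in metres (an assumed value).
    """
    # Vector from the driver's eye to the object, still in vehicle axes.
    v = np.asarray(obj_vehicle, float) - np.asarray(eye_vehicle, float)

    # Angular position of the object as seen from the driver's eye.
    azimuth = np.degrees(np.arctan2(-v[1], v[0]))                    # + right
    elevation = np.degrees(np.arctan2(v[2], np.hypot(v[0], v[1])))   # + up

    # Intersection of that line of sight with a plane virtual_image_dist
    # metres straight ahead of the eye: where the symbol must be drawn.
    scale = virtual_image_dist / v[0]
    x_img, y_img = -v[1] * scale, v[2] * scale
    return azimuth, elevation, (x_img, y_img)

# Example: a pedestrian 30 m ahead and 1.5 m left of the vehicle centerline,
# with the driver's eye 0.4 m to the right of the centerline and 1.2 m up.
print(register_to_driver_view((30.0, 1.5, 0.0), (0.0, -0.4, 1.2)))
```

A real pipeline has to do this continuously for every tracked object, fused across sensors and updated as the head-eye tracker reports new eye positions.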

In the exhibits, LightSpeed showed this commercially available aftermarket HUD. The navigation information shown on the HUD requires no synchronization with the real world; that is, the display is "non-conformal." There is no augmented reality here. (Photo: Ken Werner)

The final process is displaying the AugRel images, which requires “a wide FOV (field of view) HUD with performance attributes that supports the AR illusion.” The AR HUD has to “paint the road” with the AugRel images, Seder said.

In fact, this last display piece is not easy, and none of the HUDs deployed today are capable of doing it. In the language of the field, today’s HUDs are “non-conformal”; that is, they don’t and can’t support AR. But, as significant as the display challenges are, they pale in comparison to the registration issues.
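A little geometry shows why the "wide FOV" requirement matters. The eye height and distances in the sketch below are assumptions chosen only for illustration:

```python
import math

eye_height_m = 1.2          # assumed driver eye height above the road
near_m, far_m = 15.0, 60.0  # assumed stretch of road to be "painted"

look_down_near = math.degrees(math.atan(eye_height_m / near_m))
look_down_far = math.degrees(math.atan(eye_height_m / far_m))

print(f"look-down angle at {near_m:.0f} m: {look_down_near:.1f} deg")
print(f"look-down angle at {far_m:.0f} m:  {look_down_far:.1f} deg")
print(f"vertical FOV needed for this stretch alone: "
      f"{look_down_near - look_down_far:.1f} deg")
```

Painting that one stretch of road consumes roughly 3.4 degrees of vertical field of view before any allowance for lateral coverage, which is already comparable to the entire vertical field of view of many of today's non-conformal HUDs.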

To restate in part some of the content in Seder's flow chart, the AR system must establish the size, direction, apparent distance, luminance, and contrast of virtual objects so that they appear to belong with the real objects in the visual field, and are neither so bright as to be distracting nor so dark as to be useless. This "painting of the road" must be maintained through changes in ambient light and through changes in the driver's posture or head position that change the eye's location relative to the HUD, to the windshield, and to the real objects outside the car.
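As a small illustration of the luminance-and-contrast part of that problem, the sketch below picks a symbol luminance that holds a fixed see-through contrast ratio as the background luminance changes. The contrast target and the HUD luminance limits are assumed values, not figures presented at the Symposium.

```python
def symbol_luminance(background_cd_m2, target_contrast=1.3,
                     hud_min=1.0, hud_max=12000.0):
    """Illustrative photometric step: choose a HUD symbol luminance that
    holds a fixed contrast ratio against the scene seen through it.

    Contrast ratio here is (background + symbol) / background, a common
    definition for see-through displays; the target and the HUD luminance
    limits are assumptions.
    """
    required = (target_contrast - 1.0) * background_cd_m2
    return min(max(required, hud_min), hud_max)

# A sunlit road scene can exceed 10,000 cd/m^2; a dark road may be a few cd/m^2.
for bg in (3.0, 300.0, 10000.0):
    print(f"background {bg:>8.1f} cd/m^2 -> symbol {symbol_luminance(bg):>7.1f} cd/m^2")
```

The same symbol that is comfortably visible on a dark road would vanish against a sunlit one, and vice versa, which is why the adaptation has to track ambient light continuously.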

All of this occurs within a broader context: what information and how much information should the system show the driver, how should it be presented, and when?

During a panel discussion, a questioner asked about vehicular dynamics and image latency. The questioner had run an experiment in which a conformal HUD system was vibrated at approximately 28 Hz. The system was "painting" AugRel lane markings over the real ones, and the vibration caused the AugRel imagery to lose its synchronization with the real markings.

Panel member Dan Cashen of Daqri Automotive answered that maintaining sync under vibration or shock is very hard to do.

“Our systems aren’t fast enough to keep up.”
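A back-of-the-envelope estimate makes clear why. If the graphics are rendered for where the scene was one latency interval ago, then under sinusoidal vibration the displayed and real lane markings can disagree by up to the full peak-to-peak excursion. The amplitude and latency values below are assumptions for illustration, not measurements from the questioner's experiment.

```python
import math

def peak_registration_error_deg(freq_hz, amplitude_deg, latency_s):
    """Worst-case mis-registration for sinusoidal angular vibration.

    The overlay is drawn for the scene as it was latency_s seconds ago;
    for a sinusoid of amplitude A and frequency f, the displayed and true
    angles can differ by up to 2*A*sin(pi*f*latency), reaching the full
    peak-to-peak motion once the latency equals half a vibration period.
    """
    return 2.0 * amplitude_deg * abs(math.sin(math.pi * freq_hz * latency_s))

# 28 Hz vibration (as in the experiment), an assumed 0.2-degree amplitude,
# and assumed end-to-end latencies of 5, 20, and 50 ms.
for latency_ms in (5, 20, 50):
    err = peak_registration_error_deg(28.0, 0.2, latency_ms / 1000.0)
    print(f"latency {latency_ms:>2d} ms -> up to {err:.2f} deg of mis-registration")
```

At 28 Hz the half-period is about 18 ms, so a sensing-to-photon latency anywhere near that value guarantees the overlay swings visibly out of phase with the real markings, consistent with Cashen's comment.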

Linda Angell of Touchstone Evaluations suggested a work-around: pilots use "tunnel in the sky" graphics. This kind of approach could require less strict synchronization than a "paint the road" model.

Cashen suggested using historical data from the vehicle. If the vehicle (and presumably its driver) had traversed a route many times before, perhaps we can assume the driver is so familiar with the road that there is no reason to supply him or her with lane-guidance visual aids. Or maybe we don't need to show lane markings that are close to us; maybe we show only those virtual lane markings that enhance real lane markings far enough away to be hard to see. What information does the driver need?

Said Joe Pullukat of NSI,

“You could have a whole conference just on HMI design for HUDs.”

Cashen noted that 20% of accidents are related to perception errors, and that AR could help there. Seder had said earlier,

“There are [nascent] systems that can tell the driver where he should look and where he should look next.”

Automotive AR is hard, and at the Vehicle Displays and Interfaces Symposium, it stimulated interesting conversations and the sharing of instructive data.

There is much more to be said about the Symposium, and we will say it in the forthcoming edition of Mobile Display Monitor. – Ken Werner

Ken Werner is Principal of Nutmeg Consultants, specializing in the display industry, manufacturing, technology, and applications, including mobile devices and television. He consults for attorneys, investment analysts, and companies re-positioning themselves within the display industry or using displays in their products. He is the 2017 recipient of the Society for Information Display’s Lewis and Beatrice Winner Award. You can reach him at [email protected].