How Does A Computer Know Where You’re Looking?

How much information is too much? And where should it go? Heads-up display image from shutterstock.com

Danielle Andrew 02 Sep 2016, 19:36

Imagine driving a car, using a heads-up display projected on the windshield to navigate through an unfamiliar city. This is augmented reality (AR): the information not only guides you along a route, but also alerts you to salient details in your surroundings, such as cyclists or pedestrians. The correct placement of virtual content is crucial, perhaps even a matter of life and death.

Displayed information can’t obscure other important material, and it should stay visible long enough for you to understand it, but not much longer than that. Computer systems have to make these determinations in real time, without letting any of the information become distracting or obtrusive. We certainly don’t want a warning about a cyclist about to cross in front of the car to obscure the cyclist herself!

As a researcher in AR, I spend a lot of time trying to figure out how to get the right information onto a user’s screen, in just the right place, at just the right moment. I’ve learned that showing too much information can confuse the user, but not showing enough can render an application useless. We have to find the sweet spot in between.

A crucial element of this, it turns out, is knowing where users are looking. Only then can we deliver the information they want in a location where they can process it. Our research involves measuring where the user is looking in the real scene, as a way to help decide where to place virtual content. With AR poised to infiltrate many areas of our lives – from driving to work to recreation – we’ll need to solve this problem before we can rely on AR to provide support for serious or critical actions.

Determining where to put information

It makes sense to have information appear where the user is looking. When navigating, a user could look at a building, street or other real object to reveal the associated virtual information; the system would know to hide all other displays to avoid cluttering the visible scene.

But how do we know what someone is looking at? It turns out that the nuances of human vision allow us to examine a person’s eyes and calculate where they are looking. By pairing those data with a camera that captures the person’s field of view, we can determine what the person is seeing and what he or she is looking at.
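As a rough illustration of that pairing, the sketch below (in Python, with made-up object names, bounding boxes and coordinates rather than data from any particular eye-tracking system) shows how a gaze point reported in scene-camera coordinates might be matched against detected objects to decide which one the user is looking at, and therefore which annotation to display.

```python
# A minimal sketch, assuming hypothetical detections and coordinates:
# given the gaze point in the scene-camera image and bounding boxes for
# detected objects, pick the object the user is looking at.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str    # e.g. "cyclist", "building"
    box: tuple    # (x_min, y_min, x_max, y_max) in pixels

def object_under_gaze(gaze_xy, objects):
    """Return the object whose bounding box contains the gaze point, or None."""
    gx, gy = gaze_xy
    for obj in objects:
        x0, y0, x1, y1 = obj.box
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return obj
    return None

# Hypothetical scene: two detections in a 1280x720 scene-camera frame.
scene = [
    DetectedObject("cyclist", (400, 300, 520, 560)),
    DetectedObject("building", (800, 100, 1200, 700)),
]

looked_at = object_under_gaze((470, 420), scene)      # gaze falls on the cyclist
if looked_at is not None:
    print(f"Show annotation for: {looked_at.label}")  # hide all other overlays
```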

Eye-tracking systems first emerged in the early 1900s. Originally they were mostly used to study reading patterns; some of them were very intrusive for the reader. More recently, real-time eye tracking has emerged and become more affordable, easier to operate and smaller.

Eye-tracking spectacles can be relatively compact. Anatolich1, CC BY-SA

Eye trackers can be attached to a screen or integrated into wearable glasses or head-mounted displays. Eyes are tracked using a combination of cameras, projections and computer vision algorithms to calculate the position of the eye and the point of gaze on a monitor.
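To give a sense of the final step in that pipeline, here is a minimal sketch assuming one common approach (a polynomial regression fitted during a calibration routine) rather than the method of any specific tracker: an eye feature extracted by the cameras, such as the vector between the pupil and a corneal reflection, is mapped to a gaze point on the monitor. All the calibration numbers below are illustrative.

```python
# Sketch: map a pupil-to-corneal-reflection vector to a screen gaze point
# using a second-order polynomial fitted to (made-up) calibration data.

import numpy as np

def design_matrix(v):
    """Second-order polynomial terms of the pupil-glint vector (vx, vy)."""
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

# Calibration: the user fixates known on-screen targets while the tracker
# records the corresponding pupil-glint vectors (values are illustrative).
pupil_glint = np.array([[-0.20, -0.15], [0.00, -0.15], [0.20, -0.15],
                        [-0.20,  0.00], [0.00,  0.00], [0.20,  0.00],
                        [-0.20,  0.15], [0.00,  0.15], [0.20,  0.15]])
screen_xy   = np.array([[160,  90], [960,  90], [1760,  90],
                        [160, 540], [960, 540], [1760, 540],
                        [160, 990], [960, 990], [1760, 990]])

# Fit one least-squares polynomial model per screen axis.
A = design_matrix(pupil_glint)
coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)

def gaze_point(vector):
    """Map a new pupil-glint vector to an estimated (x, y) on the monitor."""
    return design_matrix(np.atleast_2d(vector)) @ coeffs

print(gaze_point([0.10, -0.075]))  # lands in the upper-right region of the screen
```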
