A complete navigation system for the visually impaired

Posted By : Nat Bowers

MIT researchers have developed a low-power chip for processing 3D camera data that could help visually impaired people navigate their environments. The chip consumes only one-thousandth as much power as a conventional computer processor executing the same algorithms. Using their chip, the researchers also built a prototype of a complete navigation system for the visually impaired.

About the size of a binoculars case and similarly worn around the neck, the system uses an experimental 3D camera from Texas Instruments. The user carries a mechanical Braille interface developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which conveys information about the distance to the nearest obstacle in the direction the user is moving.

“There was some prior work on this type of system, but the problem was that the systems were too bulky, because they require tons of different processing,” says Dongsuk Jeon, who was a postdoc at MIT’s Microsystems Technology Laboratories (MTL) when the work was done and joined the faculty of Seoul National University in South Korea this year. “We wanted to miniaturise this system and realised that it is critical to make a very tiny chip that saves power but still provides enough computational power.”

Jeon is the first author on the new paper. He is joined by Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science; Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; Priyanka Raina, a graduate student in electrical engineering and computer science; Nathan Ickes, a former research scientist at MTL who is now at Apple; and Hsueh-Cheng Wang, who was a postdoc at CSAIL when the work was done and joins National Chiao Tung University in Taiwan as an assistant professor this month.

In work sponsored by the Andrea Bocelli Foundation, which was founded by the blind singer Andrea Bocelli, Rus’ group had developed an algorithm for converting 3D camera data into useful navigation aids. The output of any 3D camera can be converted into a 3D representation called a 'point cloud', which depicts the spatial locations of individual points on the surfaces of objects. The Rus group’s algorithm clustered points together to identify flat surfaces in the scene, then measured the unobstructed walking distance in multiple directions.

For the new paper, the researchers modified this algorithm, with power conservation in mind. The standard way to identify planes in point clouds, for instance, is to pick a point at random, then look at its immediate neighbours and determine whether any of them lie in the same plane. If one of them does, the algorithm looks at its neighbours, determining whether any of them lie in the same plane, and so on, gradually expanding the surface.
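The region-growing approach described above can be sketched in a few lines. This is a simplified illustration, not the researchers' actual code: instead of fitting planes to 3D points, it treats two neighbouring pixels in a depth grid as coplanar when their depth values differ by less than a tolerance, which is enough to show the breadth-first "expand from a seed" access pattern.

```python
from collections import deque

def grow_plane(depth, seed, tol=0.05):
    """Grow a planar region outward from a seed pixel in a depth grid.

    Simplified stand-in for plane fitting: two 4-connected neighbours
    are treated as coplanar if their depths differ by less than `tol`.
    Note the unpredictable access pattern: the frontier can wander
    anywhere in the grid, which is what defeats memory prefetching.
    """
    rows, cols = len(depth), len(depth[0])
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(depth[nr][nc] - depth[r][c]) < tol:
                    region.add((nr, nc))
                    frontier.append((nr, nc))
    return region
```

Run on a small grid with a flat left half and a step on the right, a seed in the flat part grows to cover exactly the flat cells.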

This is computationally efficient, but it requires frequent requests to a chip’s main memory bank. Because the algorithm doesn’t know in advance which direction it will move through the point cloud, it can’t reliably preload the data it will need into its small working-memory bank.

Fetching data from main memory, however, is the biggest energy drain in today’s chips, so the MIT researchers modified the standard algorithm. Their algorithm always begins in the upper left-hand corner of the point cloud and scans along the top row, comparing each point only to the neighbour on its left. Then it starts at the leftmost point in the next row down, comparing each point only to the neighbour on its left and to the one directly above it, and repeats this process until it has examined all the points. This enables the chip to load as many rows as will fit into its working memory, without having to go back to main memory.
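The raster-scan variant can be sketched as a one-pass labelling loop. Again this is an illustrative simplification under the same depth-difference stand-in for coplanarity (the real algorithm fits actual planes, and a full implementation would also merge labels that later turn out to belong to the same surface); the point is the access pattern: each pixel touches only its left and upper neighbours, so only the current and previous rows ever need to be resident in working memory.

```python
def raster_plane_labels(depth, tol=0.05):
    """Assign a plane label to each pixel in one left-to-right,
    top-to-bottom pass, comparing each point only to the neighbour on
    its left and the one directly above it.  The scan order is fixed in
    advance, so rows can be streamed through a small working memory.
    """
    rows, cols = len(depth), len(depth[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for r in range(rows):
        for c in range(cols):
            left = up = 0
            if c > 0 and abs(depth[r][c] - depth[r][c - 1]) < tol:
                left = labels[r][c - 1]
            if r > 0 and abs(depth[r][c] - depth[r - 1][c]) < tol:
                up = labels[r - 1][c]
            if left:
                labels[r][c] = left
            elif up:
                labels[r][c] = up
            else:
                labels[r][c] = next_label  # start a new surface
                next_label += 1
    return labels
```

On the same stepped grid as before, the flat region and the raised region come out with two distinct labels after a single pass.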

This and similar tricks drastically reduced the chip’s power consumption. But the data-processing chip isn’t the component of the navigation system that consumes the most energy; the 3D camera is. So the chip also includes a circuit that quickly and coarsely compares each new frame of data captured by the camera with the one that immediately preceded it. If little changes over successive frames, that’s a good indication that the user is not moving; the chip sends a signal to the camera, which can lower its frame rate, saving power.
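The frame-comparison logic amounts to a coarse change detector. The sketch below is a software analogue of what the article describes as a hardware circuit; the threshold and the two frame rates are illustrative values, not figures from the paper.

```python
def choose_frame_rate(prev_frame, curr_frame, threshold=0.02,
                      high_fps=30, low_fps=1):
    """Coarsely compare two successive depth frames.

    If the mean absolute per-pixel difference is below `threshold`,
    the scene is assumed static (the user is not moving) and the
    camera can drop to a low frame rate to save power.  All constants
    here are illustrative assumptions.
    """
    n = len(prev_frame) * len(prev_frame[0])
    total = sum(abs(a - b)
                for row_p, row_c in zip(prev_frame, curr_frame)
                for a, b in zip(row_p, row_c))
    return low_fps if total / n < threshold else high_fps
```

Two identical frames select the low rate; a frame with a large change selects the high rate.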

Although the prototype navigation system is less obtrusive than its predecessors, it should be possible to miniaturise it even further. Currently, one of its biggest components is a heat dissipation device atop a second chip that converts the camera’s output into a point cloud. Adding the conversion algorithm to the data-processing chip should have a negligible effect on its power consumption but would significantly reduce the size of the system’s electronics.

In addition to the Andrea Bocelli Foundation, the work was co-sponsored by Texas Instruments, and the prototype chips were manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.

Story and image via MIT.

