3D sensing everywhere: all-silicon lidar goes tiny

Partner content

Robots, in general, need vision. And all sorts of robots, from industrial equipment to advanced driver assistance systems (ADAS), have relied heavily on high-resolution cameras to sense the world around them. But as we demand more and more of our inventions—industrial machines that operate without humans, cellphone apps that deliver immersive virtual reality, or fully autonomous vehicles in place of ADAS—the algorithms that control these devices need the ability to sense their surroundings in the third dimension.

One solution—arguably the best for high performance and reliability—depends on light detection and ranging (lidar). Lidar scans its field of view with a laser beam, capturing the reflected light in a special sensor chip that also records how long the laser beam took to fly from sensor to scene and back to sensor—thus measuring distance. The result is in effect a 3D image of the field of view.
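
To make the range arithmetic concrete, here is a minimal sketch in Python (illustrative only, not tied to any particular product): halve the round-trip time, multiply by the speed of light, and you have the distance to that point in the scene.

```python
# A minimal sketch of direct time-of-flight arithmetic (illustrative only,
# not code for any specific lidar product).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Convert a measured laser round-trip time into one-way distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after about 667 nanoseconds came from roughly 100 m away.
print(f"{range_from_round_trip(667e-9):.1f} m")  # -> 100.0 m
```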

Smaller systems

That is great for geographic mapping or for self-driving taxis. But small systems such as warehouse robots or delivery drones have similar needs, just over shorter ranges. They must maintain an internal map of their surroundings in order to navigate and avoid collisions. And stationary industrial equipment needs to monitor a fixed danger zone for potential intrusion by a distracted pedestrian or an incautious hand. Back on the highway, ADAS can also benefit from a 3D safety cocoon around the vehicle instead of multiple 2D images (Figure 1).

Figure 1: ADAS systems may deploy multiple lidar sensors around a vehicle to create a 3D safety cocoon.

At the extreme end of small size, low power, and low cost, smartphones need similar mapping techniques to support augmented or virtual reality apps. A virtual reality game needs a map of its surroundings so engaged players don’t walk into furniture, or worse. In augmented reality applications, the app may actually overlay virtual objects on real-world objects (Figure 2). In each of these cases, lidar can help identify and locate objects.

Figure 2: Lidar-equipped smartphones are beginning to support augmented or virtual reality apps.

But there is a problem. Large-scale automotive lidar sensors require a long effective range—up to hundreds of meters. They use powerful lasers, and they direct the light beam with moving mirrors. This keeps the beam as concentrated as possible, so the sensor can be accurate at long range and in bright ambient light.

But powerful lasers require significant amounts of power. And moving-mirror assemblies are very difficult to scale to small size, low power, and low cost. These scaling limitations have kept lidar from contributing as much as it could to small systems.

Scaling down

In order to serve these small-scale applications, it is essential to reduce the size, power, and cost of the lidar. It helps greatly that all of these applications require only moderate to short ranges. This allows the use of lower-powered lasers. But the moving-mirror assembly remains a roadblock to scaling.

It is possible to eliminate the moving-mirror subsystem. The easiest approach is flash illumination. Like a camera flash unit, the flash lidar illuminates the entire field of view with a single pulse. A lens focuses an image of the scene on a 2D time-of-flight (ToF) sensor, and each pixel in the sensor records the phase difference between the flash and the arrival of reflections.  From these phase differences, the system can determine the distance from the lidar unit to each point imaged in the field of view. 
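
As a rough sketch of how such an indirect time-of-flight measurement turns phase into distance (the 20 MHz modulation frequency below is an assumed value for illustration, not a figure from any particular sensor):

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_phase(phase_radians: float, modulation_hz: float) -> float:
    """Indirect ToF: distance is encoded in the phase shift of the modulated
    flash, d = c * phase / (4 * pi * f_mod)."""
    return SPEED_OF_LIGHT * phase_radians / (4.0 * math.pi * modulation_hz)

def unambiguous_range(modulation_hz: float) -> float:
    """Beyond c / (2 * f_mod) the phase wraps around and readings alias."""
    return SPEED_OF_LIGHT / (2.0 * modulation_hz)

F_MOD = 20e6  # assumed 20 MHz modulation, for illustration only
print(f"{range_from_phase(math.pi / 2, F_MOD):.2f} m")        # ~1.87 m
print(f"{unambiguous_range(F_MOD):.2f} m unambiguous range")  # ~7.49 m
```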

The system is mechanically simple, but it has serious challenges. The energy of the flash is spread across the whole field of view, so the reflected pulse at any one point has little energy and is easily lost in the ambient light. In high ambient-light conditions it takes a heroic flash to give the sensor anything at all to detect. Also, multipath and interference from other light sources can cause incorrect range readings.

Modified flash

Cleverness can help with these issues. For example, Apple engineers refined the flash approach for the iPhone 12 Pro by replacing the flood of light with a 24 x 24 array of discrete beams spread evenly across the field of view. More energy therefore strikes each illuminated point in the scene, improving the ambient-light rejection so much that the system can function outdoors. But with this architecture, the number of beams—576—is the limit on the resolution of the lidar. This limitation can require additional computation to interpret sensor data and can reduce the accuracy of object detection and recognition.
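
To get a feel for what that 576-beam ceiling implies, here is a back-of-the-envelope sketch; the 60-degree field of view and the 10x nearest-neighbour upsample are assumptions for illustration, not Apple's actual parameters.

```python
import numpy as np

FOV_DEG = 60.0        # assumed square field of view, illustrative only
BEAMS_PER_AXIS = 24   # 24 x 24 = 576 discrete beams

print(f"~{FOV_DEG / BEAMS_PER_AXIS:.1f} degrees between adjacent range samples")

# A sparse 24 x 24 grid of ranges (meters) that later stages must fill in,
# here with a crude nearest-neighbour upsample to a 240 x 240 grid.
sparse = np.random.uniform(0.5, 5.0, size=(BEAMS_PER_AXIS, BEAMS_PER_AXIS))
dense = np.kron(sparse, np.ones((10, 10)))  # each beam covers a 10 x 10 block
print(sparse.shape, dense.shape)            # (24, 24) (240, 240)
```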

Solid-state beam steering

There is another way.  A remarkable application of physical optics using conventional liquid-crystal technology allows an escape from this dilemma. The trick is a solid-state device—a silicon chip—that provides motionless beam steering.

The heart of this architecture is that chip—what Lumotive calls a liquid-crystal metasurface (LCM) (Figure 3). If you shine a laser beam on the chip’s surface, it can control the angle of reflection of the beam by purely electronic means, with no moving parts whatsoever.

Figure 3: The Lumotive LCM is a silicon chip that uses special properties of metasurfaces to steer reflected laser beams.

In the LCM-based lidar module (Figure 4), a row of VCSELs (vertical-cavity surface-emitting lasers) illuminates the surface of the LCM. The LCM then sweeps the reflected row of light across the field of view. A lens focuses the scene on a 2D ToF sensor, which detects the laser illumination returning from the scene. After each sweep—requiring about 50 ms—the ToF sensor provides a 640 x 480 array of range data.
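
For a sense of the resulting data rate, a 640 x 480 frame every roughly 50 ms works out to about 20 frames and six million range samples per second. The sketch below also shows one generic way to back-project such a range image into a 3D point cloud; the pinhole-camera intrinsics are placeholders, not parameters of the actual module.

```python
import numpy as np

WIDTH, HEIGHT = 640, 480   # range samples per sweep
SWEEP_SECONDS = 0.05       # roughly 50 ms per sweep

fps = 1.0 / SWEEP_SECONDS
print(f"{fps:.0f} frames/s, {WIDTH * HEIGHT * fps / 1e6:.1f} M range samples/s")

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a range image into an (N, 3) point cloud using
    assumed pinhole intrinsics (fx, fy, cx, cy are placeholders)."""
    v, u = np.indices(depth_m.shape)   # pixel row and column indices
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

depth = np.full((HEIGHT, WIDTH), 2.0)  # synthetic flat scene 2 m away
cloud = depth_to_points(depth, fx=500.0, fy=500.0, cx=WIDTH / 2, cy=HEIGHT / 2)
print(cloud.shape)                     # (307200, 3)
```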

Figure 4: Lasers, an LCM chip, a sensor chip, and optics form a complete solid-state lidar.

This approach has significant advantages. The LCM lidar can reduce laser power to about 10% of what a flash approach would require and still offer two to three times the effective range in similar ambient light conditions. Resolution is limited by the ToF sensor, not the illumination. And because only a stripe across the field of view is illuminated at any one time, multipath and interference are significantly reduced as well.

Another key advantage is that the row of light moves under software control, so it is not limited to repetitive scanning of the whole field. And all of the technology required, including that for the LCM, is already present in smartphone supply chains.
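
Because the steering angles are just data to the control software, a sweep can, for example, revisit a region of interest with finer steps. The sketch below is purely hypothetical; the function, angles, and step sizes are invented for illustration and do not represent Lumotive's interface.

```python
from typing import List, Tuple

def build_scan_schedule(fov_deg: float = 30.0,
                        coarse_step_deg: float = 1.0,
                        roi_deg: Tuple[float, float] = (-4.0, 4.0),
                        fine_step_deg: float = 0.25) -> List[float]:
    """Hypothetical scan schedule: cover the whole field coarsely, then
    revisit a region of interest (e.g. around a detected object) more finely."""
    angles: List[float] = []
    a = -fov_deg / 2.0
    while a <= fov_deg / 2.0:      # coarse pass over the full field
        angles.append(round(a, 3))
        a += coarse_step_deg
    a = roi_deg[0]
    while a <= roi_deg[1]:         # fine pass over the region of interest
        angles.append(round(a, 3))
        a += fine_step_deg
    return angles

print(len(build_scan_schedule()), "steering angles in one software-defined sweep")
```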

Compact, low-power, accurate, and manufacturable, LCM-based lidar will open new worlds of compelling applications.

Lumotive CTO and co-founder Gleb Akselrod has more than 10 years of experience in photonics and optoelectronics. Prior to Lumotive, he was the Director for Optical Technologies at Intellectual Ventures in Bellevue, WA, where he led a program on the commercialization of optical metamaterial and nanophotonic technologies. Before that, he was a postdoctoral fellow in the Center for Metamaterials and Integrated Plasmonics at Duke University, where his work focused on plasmonic nanoantennas and metasurfaces. He completed his Ph.D. in 2013 at MIT, where he was a Hertz Foundation Fellow and an NSF Graduate Fellow. For his dissertation, he studied the transport and coherence of excitons in materials used in solar cells and OLEDs. Prior to MIT, Gleb was at Landauer, Inc., where he developed and patented a fluorescent radiation sensor that is currently deployed by the U.S. Army. He holds more than 10 U.S. patents and has published over 25 scientific articles. Gleb received his B.S. in Engineering Physics from the University of Illinois at Urbana-Champaign.

Join fellow engineers at Sensors Converge, Sept. 21-23, in person or online. Register here. Gleb is speaking as part of the Autonomous Technologies pre-conference on Sept. 21.