Why LiDAR might be the future of self-driving cars

Tesla has bet big on computer vision over LiDAR. Recent advances and falling prices could help LiDAR challenge the industry leader.

Since the early 2010s, automated-vehicle technology has advanced rapidly, and so has its popularity. Almost everyone agrees the future of automobiles is self-driving cars. They can make life easier for everyone and reduce the number of road accidents caused by human error.

The question now is which technology self-driving cars will use. Elon Musk, the poster boy of self-driving cars, disagrees with the rest of the industry on this.

Self-driving is all about the sensors

The self-driving cars of today are actually not fully self-driving. The Society of Automotive Engineers has created a scale for automated vehicles that ranges from Level 0 to 5. Here’s how it works.

Level 0: Zero automation. The driver controls the car.

Level 1: Basic driver assistance. E.g., cruise control, which lets drivers take their feet off the pedals on highways.

Level 2: Limited automation. Systems control acceleration and braking as well as steering to perform complex operations.

Level 3: Situational automation. Systems handle driving in certain conditions, such as highways, and warn drivers when they must take over.

Level 4: High-level automation. Systems perform all routine tasks within the operational design domain and ask the driver to take over when unknown conditions are encountered.

Level 5: Full automation. No need for a driver. Systems can handle all operations in all kinds of conditions.

While Level 5 vehicles may take longer, Level 2 vehicles are already on the roads using ADAS (Advanced Driver-Assistance System) features. ADAS uses a set of sensors and a computer to sense and analyse the environment and make decisions based on the proximity of objects. These systems rely on sensors for the crucial data on which all decisions are based. Better sensors mean better information, and better information means greater automation.
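To make the proximity logic concrete, here is a minimal sketch of the kind of decision rule such a system might apply. Everything in it is hypothetical: the function name, the thresholds and the actions are illustrative assumptions, not taken from any real ADAS stack, which fuses many sensors and runs far more sophisticated planning.

```python
# Hypothetical sketch of a proximity-based ADAS decision rule.
# Thresholds and action names are illustrative assumptions only.

def adas_action(distance_m: float, closing_speed_mps: float) -> str:
    """Choose an action from the distance to the nearest object ahead
    and the rate at which the gap is closing (positive = approaching)."""
    if closing_speed_mps <= 0:
        return "maintain"            # gap is steady or widening
    time_to_collision = distance_m / closing_speed_mps
    if time_to_collision < 1.5:      # assumed emergency threshold, seconds
        return "emergency_brake"
    if time_to_collision < 4.0:      # assumed caution threshold, seconds
        return "slow_down"
    return "maintain"

print(adas_action(10.0, 10.0))  # 1 s to collision -> emergency_brake
print(adas_action(60.0, 10.0))  # 6 s to collision -> maintain
```

The point of the sketch is only that every branch depends on sensed proximity, which is why sensor quality sets the ceiling on what the system can safely automate.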

How computer vision and LiDAR work

Self-driving car manufacturers have a few options when choosing sensors for ADAS. Radars of different ranges, ultrasound systems, optical cameras and LiDAR are among the most popular technologies for self-driving cars. Of these, optical cameras and radars are often used together, while LiDAR is typically used as a standalone system.

Tesla has been firm in its belief that computer vision is the future of self-driving cars. As a result, it is the only major player that uses optical cameras as its primary sensors, and it has even dropped radar. The images from multiple cameras are analysed by a neural network that draws on the vast amount of data collected across the Tesla fleet to make decisions about acceleration, braking and steering.

On the other hand, LiDAR (Light Detection and Ranging) systems are used by nearly everyone else, including Waymo, Uber, Yandex and Toyota. LiDAR uses pulsed lasers to build a highly accurate 3D map of the surroundings. Movement can be detected using an FMCW (frequency-modulated continuous-wave) system or by sending two quick pulses and measuring the change in position over milliseconds. A computer uses this data to operate the car on the road.
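The pulse-based ranging described above reduces to a time-of-flight calculation: a pulse travels to the object and back, so the distance is d = c·t/2, and two ranges taken a short interval apart give a closing-speed estimate. A simplified sketch of that arithmetic (the echo times and interval are made-up illustrative values; real LiDAR turns millions of such returns per second into a 3D point cloud):

```python
# Simplified time-of-flight math behind pulsed-LiDAR ranging.
# Echo times and the pulse interval below are illustrative values only.

C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance to the target: the pulse covers the gap twice (out and back)."""
    return C * round_trip_s / 2.0

def radial_velocity(r1_m: float, r2_m: float, dt_s: float) -> float:
    """Closing speed from two ranges taken dt seconds apart
    (positive = approaching), as in the two-pulse scheme described above."""
    return (r1_m - r2_m) / dt_s

r1 = range_from_echo(400e-9)   # a 400 ns echo -> roughly 60 m away
r2 = range_from_echo(399e-9)   # slightly shorter echo 10 ms later
print(round(r1, 1))                              # ~60.0 m
print(round(radial_velocity(r1, r2, 10e-3), 1))  # ~15.0 m/s closing speed
```

Because light covers the out-and-back trip in nanoseconds, even tiny timing differences resolve to centimetre-scale position changes, which is what makes millisecond-spaced pulse pairs usable for speed estimation.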

Tesla’s logic

Tesla has multiple reasons to be at odds with the industry on sensors. The biggest advantage of optical cameras is their extremely low cost. Multiple cameras mounted on a car provide large amounts of data for the neural network to work on. Building these neural networks is very complex, but Tesla has invested heavily in them, and each car on the road improves the current networks further.

Elon Musk, CEO of Tesla, has called LiDAR "a fool's errand" and "a crutch". Opponents of LiDAR argue that humans drive using vision alone, so self-driving cars shouldn't need more than that, and they point to LiDAR's shortcomings to back the argument: LiDAR units are far more expensive than optical cameras; they map the surroundings well but struggle to distinguish between different objects; and they lack the ability to see in colour that optical cameras have.

The case for LiDAR

LiDAR has come a long way over the years. Some shortcomings remain, but many have been or are being solved. Costs have fallen to a fraction of what they once were, and companies are working to bring a unit down to $250. Movement detection is now possible with FMCW systems or rapid-fire pulses. And the colour-detection issue could be overcome by integrating the system with traffic-light infrastructure once the cars hit the road.
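The FMCW approach mentioned here infers velocity directly from the Doppler shift of the returned light: a target closing at radial speed v shifts the frequency by Δf = 2v/λ. A toy illustration of that relation (the 1550 nm wavelength is an assumption, a common choice in FMCW LiDAR; this is not any vendor's actual signal processing):

```python
# Toy Doppler calculation behind FMCW-style movement detection.
# The 1550 nm wavelength is an assumed, typical value; purely illustrative.

WAVELENGTH_M = 1550e-9  # 1550 nm

def doppler_shift_hz(radial_velocity_mps: float) -> float:
    """Frequency shift produced by a target closing at the given speed."""
    return 2.0 * radial_velocity_mps / WAVELENGTH_M

def velocity_from_shift_mps(delta_f_hz: float) -> float:
    """Invert the relation: recover radial speed from a measured shift."""
    return delta_f_hz * WAVELENGTH_M / 2.0

shift = doppler_shift_hz(30.0)        # ~108 km/h closing speed
print(f"{shift / 1e6:.1f} MHz")       # ~38.7 MHz shift
print(round(velocity_from_shift_mps(shift), 3))  # recovers ~30.0 m/s
```

The appeal of the scheme is that velocity falls out of a single continuous measurement rather than a comparison of two separate position snapshots.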

Beyond solving those issues, LiDAR developers make a strong case on safety. In 2016, a Tesla Model S was involved in a fatal crash, and since then authorities have been investigating Tesla crashes involving stopped emergency vehicles. LiDAR developers believe their systems would not miss such objects and could avoid those crashes or make them less severe. LiDAR's ability to map the surroundings accurately in 3D gives it an advantage over 2D camera images, where the computer has to rely on limited visual information. At the end of the day, everyone wants a safer car, even if it costs a bit more.

It remains to be seen whether Tesla's faith in computer vision is well placed or whether LiDAR proves it wrong. Tesla has the advantage of having cars on the road, but LiDAR is also improving rapidly. LiDAR developers believe their systems are the best path to Level 5 automated driving. Some believe a system combining LiDAR and computer vision will come out the winner. Even in that case, it will be a win for the LiDAR developers who spent years improving the technology and bringing its costs down.


About the author

Iestyn Cowan, Technical Engineer, EKSMA Optics

Iestyn Cowan is a Technical Engineer at EKSMA Optics. He has over a decade of experience in the field of laser components manufacturing.