
Why Tesla’s reliance on camera data for AI might not be enough


Tesla maintains that artificial intelligence, the heart of its self-driving technology, gathers all the data it needs from cameras. CEO Elon Musk says that cameras and radar sometimes present conflicting data. “When radar and vision disagree, which one do you believe?” he tweeted. “Vision has much more precision, so better to double down on vision than do sensor fusion.”

But many automakers believe a combination of sensors — typically cameras, radar and lidar (which Tesla has never used) — is essential for safe autonomous driving, because each sensor complements the others’ strengths and compensates for weaknesses.

Since human drivers rely primarily on vision, cameras make intuitive sense. Tesla’s approach uses a single, or monocular, camera to analyze a given scene. But to a monocular camera pointed in one direction, the world is flat. The system must infer depth from cues in the scene, such as the known size of a vehicle or a person. Without true, measured depth perception, a 2D camera can’t distinguish a live scene from a scenic poster, for instance. Another failure mode for cameras is poor visibility, such as bad weather or unlit nighttime conditions, which do not degrade the performance of radar.
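To make that inference concrete, here is a minimal Python sketch of estimating depth from a single image under the pinhole camera model, using an assumed real-world object size. It is illustrative only, not Tesla’s actual pipeline, and the focal length and object dimensions are made-up values.

# Minimal sketch: depth from a single camera image via the pinhole model.
# Illustrative only -- the focal length and object sizes are assumed values.

def estimate_depth(focal_length_px: float,
                   real_height_m: float,
                   pixel_height_px: float) -> float:
    """Depth is roughly focal_length * real_height / height_in_pixels.

    The estimate is only as good as the assumed real-world size: a scaled-down
    photo of a car would be reported as a distant car, which is exactly the
    poster-versus-live-scene ambiguity described above.
    """
    return focal_length_px * real_height_m / pixel_height_px

# Example: a sedan (~1.5 m tall) spanning 50 pixels in an image taken with an
# assumed 1,000-pixel focal length comes out at roughly 30 m away.
depth_m = estimate_depth(focal_length_px=1000.0,
                         real_height_m=1.5,
                         pixel_height_px=50.0)
print(f"Estimated depth: {depth_m:.1f} m")

If the size assumption is wrong, the depth estimate is wrong with it; radar and lidar avoid this by measuring distance directly.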

Radar offers the significant advantage of measured, precise 3D depth. It can perceive objects at long range and measure the physical distance between the car and objects in the road. Radar is effective in any light or weather conditions, including darkness, fog and snow. However, radar cannot clearly identify objects, so it must be paired with other high-fidelity sensors. Lidar uses lasers to produce more precise 3D imaging, but at shorter range, lower resolution than cameras and much higher cost. Further, lidar relies on scanning the scene with pulses of light, which takes time. Because of lidar’s inherently low resolution, many scans must be completed to fully assess a scene, which can cause the lidar to miss critical objects that pose a danger to the vehicle.
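A rough back-of-the-envelope calculation shows why lidar returns thin out at range. The numbers below are assumptions chosen for illustration, not the specification of any particular sensor.

import math

# Illustrative arithmetic with assumed, typical-looking spec values (not any
# specific vendor's lidar): how sparse a spinning lidar's returns get at range.
horizontal_resolution_deg = 0.2   # assumed angular step between laser returns
target_distance_m = 100.0         # distance to an object of interest
pedestrian_width_m = 0.5          # rough shoulder width of a pedestrian

# Spacing between adjacent returns at that distance.
point_spacing_m = target_distance_m * math.tan(math.radians(horizontal_resolution_deg))

# Returns that land on the pedestrian in a single scan line.
returns_on_target = pedestrian_width_m / point_spacing_m

print(f"Point spacing at {target_distance_m:.0f} m: {point_spacing_m:.2f} m")
print(f"Returns across a pedestrian: ~{returns_on_target:.1f}")

With these assumed numbers the returns land about 0.35 m apart at 100 m, so a pedestrian might intersect only one or two returns per scan line, which is why multiple scans, and therefore time, are needed to confidently assess a scene.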

Another limitation of Tesla’s approach is the computational load its neural networks place on the onboard processors. An autonomous vehicle’s AI brain consists of complex computing systems that integrate sensors, computation, communication, storage, power management and full-stack software.

Autonomous vehicles process huge amounts of time-sensitive data vital to safety, such as lane markings, traffic flow, stop signs and lights. These systems command enough computing power to become “data centers on wheels.”

But even that kind of power has its limits. The computing power of a Tesla is dwarfed by that of the human brain. Because of those compute limitations, Tesla must choose which features to burden the vehicle’s processors with. Many believe Tesla would do well to reassess its priorities, including resolving known safety issues, before spending its AI “brain” power on innovations such as an Assertive driving mode. Removing radar and prioritizing new features such as the Assertive mode are trade-offs Tesla is making that could cost human lives.
