Why centralized sensor fusion is the future of autonomous vehicles
With high-level sensor fusion, edge processing constrains each smart sensor's size, power budget and compute resources, limiting overall AV performance. In addition, processing high volumes of data at the edge can quickly drain the vehicle's battery and reduce its range.
An algorithm-first, central-processing architecture, on the other hand, enables what we call deep, centralized sensor fusion. Built on the most advanced semiconductor process nodes, this approach optimizes AV performance by dynamically distributing processing capacity across all sensors, boosting performance for particular sensors and directions as the driving scenario demands. With access to high-quality, low-level raw data, the central processor can make more intelligent, and more accurate, driving decisions.
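To make the idea of dynamic distribution concrete, here is a minimal sketch, not a vendor API, of a central processor that reallocates a fixed compute budget across raw sensor streams according to the driving scenario. The sensor names, scenario labels, and weight values are all hypothetical; a real system would derive the weights from speed, map context, and detected agents.

```python
# Illustrative sketch: centralized allocation of a shared compute budget
# across sensors, instead of fixed per-sensor capability at the edge.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str        # hypothetical sensor name, e.g. "front_radar"
    direction: str   # direction of coverage

# Hypothetical scenario -> per-direction relevance weights.
SCENARIO_WEIGHTS = {
    "highway_cruise": {"front": 3.0, "rear": 1.0, "left": 0.5, "right": 0.5},
    "urban_turn":     {"front": 1.5, "rear": 0.5, "left": 2.0, "right": 2.0},
}

def allocate_compute(sensors, scenario, total_tops=100.0):
    """Split a total processing budget (in TOPS) across sensors in
    proportion to how relevant each direction is to the scenario."""
    weights = SCENARIO_WEIGHTS[scenario]
    raw = [weights.get(s.direction, 1.0) for s in sensors]
    total = sum(raw)
    return {s.name: total_tops * w / total for s, w in zip(sensors, raw)}

sensors = [
    Sensor("front_radar", "front"),
    Sensor("rear_radar", "rear"),
    Sensor("left_camera", "left"),
    Sensor("right_camera", "right"),
]

highway = allocate_compute(sensors, "highway_cruise")
urban = allocate_compute(sensors, "urban_turn")
# During highway cruising the budget shifts toward the forward sensors;
# during an urban turn it shifts toward the side cameras.
```

The key property this sketch illustrates is that the same total budget is reused: when one direction matters more, its sensors get more processing without adding silicon at the edge.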
AV manufacturers can pair low-power radar and camera sensors with bleeding-edge, algorithm-first, application-specific processors. The result: optimal perception and path-planning performance at the highest power efficiency, which significantly extends each AV's range while lowering battery costs.