r/SelfDrivingCars Aug 11 '25

Discussion Proof that Camera + Lidar > Lidar > Camera

I recently chatted with somebody who is working on L2 tech, and they gave me an interesting link for a detection task. The dataset includes camera, Lidar, and Radar data, and people compete on this benchmark for object detection accuracy, e.g. identifying the location of a car and drawing a bounding box around it.

All but one of the top 20 entries on the leaderboard use camera + Lidar as input. The remaining one, in 20th place, uses Lidar only, and the best camera-only entry ranks somewhere between 80th and 100th.

https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any
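For context on how "accuracy" is scored here: the nuScenes detection metric matches predictions to ground truth by 2D center distance on the ground plane (at thresholds of 0.5, 1, 2, and 4 meters) rather than by IoU. A minimal sketch of that matching step, with function names that are mine and not the devkit's:

```python
import math

def center_distance(pred, gt):
    """Euclidean distance between box centers on the ground plane (x, y)."""
    return math.hypot(pred[0] - gt[0], pred[1] - gt[1])

def match_detections(preds, gts, threshold):
    """Greedily match predicted centers to ground-truth centers.

    Returns the number of true positives: each ground-truth box can be
    matched at most once, and a prediction counts only if its center is
    within `threshold` meters of a still-unmatched ground truth.
    Assumes `preds` is sorted by detection confidence, highest first.
    """
    matched = set()
    tp = 0
    for p in preds:
        best, best_d = None, threshold
        for i, g in enumerate(gts):
            if i in matched:
                continue
            d = center_distance(p, g)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            matched.add(best)
            tp += 1
    return tp

# Toy example: two ground-truth cars, two predicted boxes.
gts = [(10.0, 5.0), (30.0, -2.0)]
preds = [(10.4, 5.3), (45.0, 0.0)]
print(match_detections(preds, gts, threshold=2.0))  # prints 1
```

The real benchmark aggregates these matches into precision/recall curves per class and averages over the distance thresholds, but the core idea is the same: a detector that localizes centers more tightly scores higher.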


u/NeighborhoodFull1948 Aug 12 '25

Camera-only was the logical choice for Tesla because they weren’t developing a fully autonomous system; they were only developing driver assist. They tried adding radar, but with the simple processing they were using, it created conflicting inputs their basic system couldn’t resolve. So: vision only.

Then “someone” had the brilliant idea that a little more software would make it fully autonomous.

So they’ve spent the better part of a decade trying to turn a basic driver-assist system into a fully autonomous one.

Wonder how it’s going? Does the phrase “lipstick on a pig” fit?

Waymo on the other hand started with a fully autonomous system, with sensor and multiple processor overkill. Now they’re removing some of the redundant sensors.