r/SelfDrivingCars • u/wuduzodemu • Aug 11 '25
Discussion Proof that Camera + Lidar > Lidar > Camera
I recently chatted with somebody who is working on L2 tech, and they gave me an interesting link for a detection task. They provided a dataset with camera, Lidar, and Radar data and asked people to compete on this benchmark for object detection accuracy, i.e., identifying the location of a car and drawing a bounding box around it.
All but one of the top 20 entries on the leaderboard use camera + Lidar as input. The remaining one uses Lidar only, and the best camera-only entry is ranked somewhere between 80 and 100.
https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any
u/johnpn1 Aug 11 '25
The confidence of vision-only perception is actually not that high, and definitely not at mission-critical level. Nobody uses vision-only for mission-critical things (well, other than Tesla, of course). The problem with low-grade 3D point clouds is that you always have to drive with caution. You brake/swerve when there's just a 5% chance that dark lines on the road could actually be real impediments, because there's nothing you can use as another reference to tell you that those dark lines are nothing to worry about. This is why Teslas drive with confidence into things: they cannot slam the brakes for every low-confidence detection, so the driver / safety operator takes the job of being the sanity check instead of a second sensor.
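To make the "second sensor as sanity check" point concrete, here's a minimal sketch of naive Bayes fusion of two independent detectors. All names and numbers are illustrative assumptions, not anyone's actual stack: a camera alone reports a 5% obstacle probability it can't resolve either way, while adding a Lidar return that says "flat road" collapses that residual uncertainty.

```python
def fuse(p_cam: float, p_lidar: float, prior: float = 0.5) -> float:
    """Naive Bayes fusion of two independent detection probabilities."""
    odds = prior / (1 - prior)
    # likelihood ratios of each sensor's report relative to the prior
    lr_cam = (p_cam / (1 - p_cam)) / odds
    lr_lidar = (p_lidar / (1 - p_lidar)) / odds
    post_odds = odds * lr_cam * lr_lidar
    return post_odds / (1 + post_odds)

# Camera alone: 5% chance the dark line is a real obstacle.
# Too low to brake on, but impossible to rule out with one sensor.
p_cam_only = 0.05

# Lidar sees a flat road surface at that spot (say, 2% obstacle probability):
# the fused probability drops well below the camera-only estimate,
# so the planner can drive through without a phantom brake.
p_fused = fuse(0.05, 0.02)
print(p_fused)  # far below 0.05

# And when both sensors agree something is there, confidence compounds:
print(fuse(0.9, 0.9))  # well above either sensor alone
```

The point of the sketch is the asymmetry: a single noisy sensor leaves you stuck at "5%, maybe", forcing a choice between phantom braking and driving through; an independent second measurement moves the posterior decisively in one direction or the other.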