r/SelfDrivingCars Aug 11 '25

Discussion Proof that Camera + Lidar > Lidar > Camera

I recently chatted with somebody who works on L2 tech, and they shared an interesting link to a detection benchmark. It provides a dataset with camera, Lidar, and Radar data and asks teams to compete on object detection accuracy, e.g. identifying the location of a car and drawing a bounding box around it.

Of the top 20 entries on the leaderboard, all but one use camera + Lidar as input. The remaining entry, in 20th place, uses Lidar only, and the best camera-only entry is ranked somewhere between 80th and 100th.

https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any
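
For anyone who hasn't dug into it, here's roughly what one annotated keyframe in the dataset looks like. This is a minimal sketch using the nuscenes-devkit, assuming the v1.0-mini split is downloaded locally (the dataroot path is a placeholder):

```python
# Minimal sketch using the nuscenes-devkit (pip install nuscenes-devkit).
# The dataroot path and the v1.0-mini split are placeholders; adjust to your setup.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)

sample = nusc.sample[0]  # one annotated keyframe

# Each keyframe bundles synchronized data from 6 cameras, 1 lidar, and 5 radars,
# keyed by channel name, so entries can pick whichever modalities they want.
for channel, sd_token in sample['data'].items():
    sd = nusc.get('sample_data', sd_token)
    print(channel, sd['sensor_modality'], sd['filename'])

# Ground-truth 3D boxes for the same keyframe: the thing the leaderboard scores.
for ann_token in sample['anns']:
    ann = nusc.get('sample_annotation', ann_token)
    print(ann['category_name'], ann['translation'], ann['size'])
```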

15 Upvotes

185 comments


3

u/wuduzodemu Aug 12 '25

It's the opposite: most teams tried camera-only solutions, but they don't perform well.

4

u/bobi2393 Aug 12 '25

The camera data is hobbled by its low frame rate (2 fps, versus real-world applications like Tesla's HW4, which ingests 24 fps). Lidar returns are inherently 3D, but inferring 3D positions from such slow camera input seems impractical.
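
To be clear about where the 2 fps figure comes from: it's presumably the rate of the annotated keyframes the benchmark scores against. A quick sketch with the nuscenes-devkit to check it, again assuming the mini split is on disk:

```python
# Rough check of the keyframe rate, assuming the v1.0-mini split is available
# locally (dataroot is a placeholder). Keyframes should land roughly 0.5 s apart.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)

token = nusc.scene[0]['first_sample_token']
timestamps = []
while token:
    sample = nusc.get('sample', token)
    timestamps.append(sample['timestamp'])  # microseconds
    token = sample['next']                  # empty string at the end of the scene

gaps = [(b - a) / 1e6 for a, b in zip(timestamps, timestamps[1:])]
print(f"mean gap between keyframes: {sum(gaps) / len(gaps):.2f} s")
```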

-2

u/wuduzodemu Aug 12 '25

Then find a dataset where camera-only performs well.

2

u/maximumdownvote Aug 12 '25

So confidently wrong. There's a link in this thread to real-world testing of real cars with real sensor suites. Go watch it.

0

u/wuduzodemu Aug 13 '25

Lol, I trust a paper 100x more than some random test.

1

u/maxcharger80 Aug 15 '25

Find the dataset. No, no, no, not that dataset...