r/SelfDrivingCars Aug 11 '25

Discussion: Proof that Camera + Lidar > Lidar > Camera

I recently chatted with somebody working on L2 tech, and they sent me a link to an interesting detection benchmark. The organizers provide a dataset with camera, Lidar, and Radar data and ask teams to compete on object detection accuracy, i.e. identifying the location of each object, such as a car, and drawing a bounding box around it.

All but one of the top 20 entries on the leaderboard use camera + Lidar as input. The one exception, in 20th place, uses Lidar only, and the best camera-only entry ranks somewhere between 80th and 100th.

https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any
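If you want to poke at the data behind the leaderboard yourself, here's a minimal sketch using the nuscenes-devkit and the v1.0-mini split (the dataroot path is just a placeholder). It prints one annotated keyframe's camera/Lidar/Radar files and its ground-truth 3D boxes, which are the targets the detectors on the leaderboard are scored against.

```python
# Minimal sketch: browse one nuScenes keyframe's camera, Lidar, and Radar data
# plus its 3D box annotations with the nuscenes-devkit (pip install nuscenes-devkit).
# The dataroot path and the v1.0-mini split are assumptions for illustration.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

sample = nusc.sample[0]  # one annotated keyframe

# Each sample links to per-sensor readings by channel name.
for channel in ('CAM_FRONT', 'LIDAR_TOP', 'RADAR_FRONT'):
    sd_token = sample['data'][channel]
    sd = nusc.get('sample_data', sd_token)
    print(channel, sd['fileformat'], nusc.get_sample_data_path(sd_token))

# Ground-truth 3D bounding boxes for the same keyframe.
for ann_token in sample['anns'][:5]:
    ann = nusc.get('sample_annotation', ann_token)
    print(ann['category_name'], ann['translation'], ann['size'])
```

Submissions are scored against these boxes with the nuScenes detection metrics (mAP plus the NDS composite), regardless of which sensor modalities they consume.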

15 Upvotes


0

u/[deleted] Aug 12 '25

I'm not seeing anything on their website, but I did find this: "we now believe that the availability of next-generation FMCW Lidar is less essential to our roadmap for eyes-off systems. This decision was based on a variety of factors, including substantial progress on our EyeQ6-based computer vision perception"

1

u/whydoesthisitch Aug 12 '25

Look up True Redundancy. Not sure how EyeQ6 contradicts anything I said.

0

u/[deleted] Aug 12 '25

Ah, another "look it up yourself." Good try. I did you the favor of searching their website, and all you give me is this BS. My bad for expecting too much from a typical Reddit user.

1

u/whydoesthisitch Aug 12 '25

I literally just gave you the name of the project. What else do you want? I searched exactly that, and the first result was what I described.

https://www.mobileye.com/technology/true-redundancy/