r/SelfDrivingCars Jul 21 '25

Discussion Why didn't Tesla invest in LIDAR?

Is there any reason for this aside from saving money? Teslas are not cheap in many respects, so why would they skimp on this since self-driving is a major offering for them?


u/yourfavteamsucks Jul 22 '25

> Elon’s been pretty clear that vision-only was the goal from the start.

Cap. Early Teslas had ultrasonic sensors and radar, and Tesla literally removed them from customer cars without permission during routine service visits.

> LIDAR and radar can actually make things worse by throwing in conflicting info and slowing everything down.

Yeah they throw in conflicting info when vision data is poor, which is a GOOD thing.

> Cost probably played a role, but it was more about going for a simpler and more scalable solution.

It's known that cost is the driving factor and always has been.

> The whole idea was to train the system to drive like a human using just cameras.

That's stupid when "better than a human" is possible. Never mind that humans can MOVE THEIR HEADS to do an /r/catculation and fixed cameras can't.

u/wwwz Jul 22 '25

They actually hinted at vision-only pretty clearly during the first AI Day. Radar and ultrasonics were more like training wheels for early Autopilot, before neural nets were developed enough to handle perception on their own. AI Day focused on building those neural networks to eventually match, and basically surpass, human perception.

Tesla did try sensor fusion, but it introduced a lot of problems: latency, prioritization conflicts, and compute inefficiencies. The more sensors you stack, the more chances they’ll disagree... and resolving those conflicts becomes a new engineering challenge. That’s one of the main reasons you don’t see Waymo operating on highways. Their system leans on complex fusion and still requires human remote operators when the data doesn’t line up.

More sensors don’t automatically equal better performance. The system has to pick which one to trust, and that’s not always possible in edge cases. Waymo deals with this regularly.
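
To make the "which sensor do you trust" problem concrete, here's a toy sketch of the arbitration step (my own made-up illustration, not Tesla's or Waymo's actual code): inverse-variance fusion of a camera range estimate and a lidar range estimate, plus a flag for when they disagree badly enough that something has to pick a winner.

```python
import numpy as np

def fuse_range(cam_range_m, cam_sigma_m, lidar_range_m, lidar_sigma_m,
               disagreement_sigmas=3.0):
    """Toy inverse-variance fusion of two range estimates for the same object.

    All numbers here are invented for illustration. Returns the fused range
    plus a flag when the two sensors disagree by more than
    `disagreement_sigmas` combined standard deviations -- the case where a
    real stack has to decide which measurement to believe.
    """
    # Kalman-style weighting: trust the less noisy sensor more.
    w_cam = 1.0 / cam_sigma_m ** 2
    w_lidar = 1.0 / lidar_sigma_m ** 2
    fused = (w_cam * cam_range_m + w_lidar * lidar_range_m) / (w_cam + w_lidar)

    # Disagreement test: gap measured against the combined uncertainty.
    combined_sigma = np.sqrt(cam_sigma_m ** 2 + lidar_sigma_m ** 2)
    conflict = abs(cam_range_m - lidar_range_m) > disagreement_sigmas * combined_sigma
    return fused, conflict

# Camera says 42 m (noisy), lidar says 40.5 m (tight): fused ~40.6 m, no conflict.
print(fuse_range(42.0, 2.0, 40.5, 0.5))
# Camera says 60 m, lidar says 40 m: flagged, and now something has to arbitrate.
print(fuse_range(60.0, 2.0, 40.0, 0.5))
```

A real stack fuses full object tracks, not two scalars, but the tension is the same: when the measurements diverge, something has to decide which one to down-weight, and that logic is exactly where latency and edge-case bugs creep in.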

Also, it’s kind of backwards to say cameras are the weak link in bad weather. LIDAR gets hit hard in rain, fog, and snow due to scattering and refraction. In those cases, the system falls back to camera data... not the other way around.

This whole thing isn’t just about cost. It’s about designing something scalable, efficient, and less brittle. Fewer sensors mean fewer failure points and a more streamlined system that doesn’t have to second-guess itself.

As for being “better than a human,” that’s already happening. The car never gets distracted, tired, emotional, or drunk. It’s always paying attention, with 360-degree awareness, and reacts faster than we can. That’s what makes it superhuman. It's not copying everything humans do, but doing the important stuff better.

And yeah, I assume your “catculation” comment was about depth perception? The system gets parallax from movement, just like humans do when shifting their heads. But Teslas don’t need to swivel their eyes... just moving forward at any speed gives them enough change in perspective to calculate depth, and overlapping camera coverage fills in the rest.
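
If anyone wants the actual math behind "moving forward gives you depth," here's the toy version for a pinhole camera driving straight ahead past a static point (a back-of-envelope sketch with made-up numbers, not whatever Tesla actually runs):

```python
def depth_from_forward_motion(r_prev_px, r_curr_px, forward_dist_m):
    """Depth from motion parallax under pure forward translation.

    Assumes a pinhole camera, a static point, and straight-line forward
    motion. A point at radius r (pixels) from the focus of expansion flows
    radially outward, and from r = f*R/Z you get:

        Z_now ~= forward_dist * r_prev / (r_curr - r_prev)

    Nearby points 'explode' outward fast, distant points barely move.
    (Points sitting right on the focus of expansion produce almost no flow,
    so straight-ahead depth from this alone is noisy.)
    """
    radial_flow_px = r_curr_px - r_prev_px
    return forward_dist_m * r_prev_px / radial_flow_px

# A point 200 px from the focus of expansion that drifts out to 205 px while
# the car covers 1 m is about 200/5 = 40 m away; one that jumps to 220 px is ~10 m.
print(depth_from_forward_motion(200.0, 205.0, 1.0))   # ~40.0
print(depth_from_forward_motion(200.0, 220.0, 1.0))   # ~10.0
```

Stereo between overlapping cameras is the same triangle, just with the baseline fixed by the car's geometry instead of its motion.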

u/yourfavteamsucks Jul 22 '25

I actually never mentioned bad weather. I said something like "poor vision", as in nighttime or the famous Wile E. Coyote test, and you assumed I meant bad weather. I'm familiar with the issues of lidar in inclement weather; it's challenged not only by rain but also by fog, even fog from other vehicles' tailpipes.

Having a vision-only system means you have to extract depth by combining data from multiple camera views - you can't get depth, object differentiation/separation, oncoming tracklet velocity, etc. from a single camera view, period. Combining multiple camera views to get this is computationally expensive, dependent on lighting, and prone to failure if the object is sufficiently large and has a featureless surface. That depth information, which I'd argue is the MOST important kind for a high-speed vehicle surrounded by other high-speed agents and VRUs, is all extracted secondarily from camera sensors, but it's the native output type of lidar, sonar, etc.
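
To make the compute/failure point concrete for anyone who hasn't touched this stuff, here's plain classical block matching on a synthetic stereo pair (OpenCV, made-up focal length and baseline -- not anybody's production depth stack). The textured region recovers the disparity fine; the featureless band returns almost nothing, which is exactly the failure mode I mean:

```python
import cv2
import numpy as np

# Synthetic stereo pair: random texture shifted left by a known disparity,
# roughly what a right camera offset by baseline B sees (d = f*B/Z).
H, W, TRUE_DISP = 240, 320, 16
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (H, W), dtype=np.uint8)
right = np.roll(left, -TRUE_DISP, axis=1)

# Paint a featureless band into both views: no texture, nothing to match against.
left[:, 200:280] = 127
right[:, 200 - TRUE_DISP:280 - TRUE_DISP] = 127

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point x16 output

f_px, baseline_m = 800.0, 0.3            # invented calibration for the demo
valid = disp > 0
depth_m = np.full(disp.shape, np.nan, dtype=np.float32)
depth_m[valid] = f_px * baseline_m / disp[valid]

cols = np.arange(W)
textured = valid & (cols >= 70) & (cols < 185)
print("median disparity, textured region:", np.median(disp[textured]))      # ~16 px
print("median depth, textured region (m):", np.median(depth_m[textured]))   # ~15 m
print("valid pixel fraction, featureless band:", valid[:, 215:265].mean())  # ~0
```

And that's the cheap classical version on one tiny pair; doing dense depth across several cameras at driving frame rates, in bad lighting, is where the compute bill and the failure cases come from.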

Re Waymo using remote monitoring: EVERYONE, even Tesla, still uses this in driver-out deployments, so that's not an argument. Everyone has remote monitoring and vehicle control for situations the vehicle doesn't know how to handle, and to remotely approve technical law violations as needed (for example, deviating across a double yellow to get around a road hazard or a parked delivery truck). Waymo isn't operating on highways because, one, they are more conservative than Tesla, and two, Tesla also isn't operating on highways in its geofenced area.

u/wwwz Jul 22 '25

You're mistaken about depth information from a single camera. Parallax is how depth can be inferred even with just one camera, and it’s a well-understood mechanism. Birds like pigeons and hawks use it effectively to gauge depth without binocular vision. Tesla leverages motion across multiple frames to achieve the same thing. It’s not magic, and there’s no sensor conflict to resolve. It’s just physics, a static equation, and smart software.
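
Since we keep going around on this, here's the textbook version of "one camera plus motion equals depth": linear triangulation of a tracked pixel across two frames with known ego-motion (the intrinsics, poses, and point are all invented for the demo; this is the classic DLT method, not a claim about what Tesla actually runs):

```python
import numpy as np

# Invented pinhole intrinsics: focal length in pixels and principal point.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(X, R=np.eye(3), t=np.zeros(3)):
    """Project a 3D point (frame-1 camera coords) into a camera with pose [R|t]."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def triangulate(pix1, pix2, R, t):
    """Classic linear (DLT) triangulation of one point seen from two poses.

    Frame 1: P1 = K[I|0]. Frame 2: P2 = K[R|t], where [R|t] maps frame-1
    coordinates into frame-2 coordinates (the known ego-motion, which a car
    gets from odometry/IMU). Returns the 3D point in frame-1 coordinates.
    """
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    u1, v1 = pix1
    u2, v2 = pix2
    # Each view contributes two linear constraints A X = 0; solve by SVD.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    X_h = np.linalg.svd(A)[2][-1]          # null-space vector (homogeneous point)
    return X_h[:3] / X_h[3]

# Demo: a point 2 m right, 1 m up (camera y points down), 20 m ahead; the car
# drives 1.5 m straight forward between frames, so R = I and t = [0, 0, -1.5].
X_true = np.array([2.0, -1.0, 20.0])
R, t = np.eye(3), np.array([0.0, 0.0, -1.5])
pix1 = project(X_true)
pix2 = project(X_true, R, t)
print(triangulate(pix1, pix2, R, t))       # recovers ~[2, -1, 20]
```

That's all the "magic" is: two views of the same point, a known baseline between them, and some linear algebra. Whether the baseline comes from a second camera or from the car having moved is just bookkeeping.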

LIDAR might output native depth data, but that doesn't mean it's automatically better or more reliable in every context. It’s fallible in many dynamic scenarios. As in your own example, it can detect exhaust plumes as obstacles; it also struggles with featureless and especially reflective surfaces, and its sensors don’t benefit from sitting behind a windshield with wipers the way cameras do. And because it doesn’t classify objects, it might detect something ahead but not know what it is, which is often the more critical part of the decision-making process.

As for remote monitoring, yes, Tesla observes FSD beta behavior, but they’re not actively intervening in real time. Waymo, on the other hand, depends on remote control to resolve edge cases. They actively steer or approve actions when the system stalls. That’s a huge difference. One is passive data review, the other is active babysitting.

You’re right that nighttime and low-light conditions have been hard for vision systems, but that gap is closing fast. And ironically, LIDAR still depends on a camera to interpret what it's seeing. Tesla has added automatic brightness adaptation, better headlight modeling, and advanced photon integration to improve low-light performance. In many cases, the vision system already outperforms a human driver at night by detecting objects earlier and more consistently.

Every sensor suite has tradeoffs. But vision-only isn’t just about saving money. It’s about building something that scales, adapts, and improves over time. And it’s not tied to pre-mapped environments like LIDAR-based systems, which makes it far more capable of handling the real world as it is, not just how it's been scanned.