Why Does Tesla CEO Elon Musk Disagree With Experts About Lidar?


MAR 4 2018 BY EVANNEX


Tesla’s Autopilot in action (Instagram: themaverique)


Just as animals use several different senses to perceive the world around them, cars and other vehicles rely on various sensor technologies, including radar, cameras combined with image recognition, and a newer technology called lidar, which calculates distances using pulses of infrared laser light. (RAdar [Radio Detection and Ranging] and lidar [LIght Detection and Ranging] are both acronyms, but the punctuation pundits have decided that radar, which came into widespread use during World War II, has been around long enough to become an ordinary word, and so no longer needs to be capitalized. For the sake of consistency, we’ll give lidar the same honor.)


When it comes to self-driving vehicles, most folks in the industry seem to agree that a combination of sensors, working together with GPS-based mapping data, is the way to go. However, not everyone agrees on which sensor technologies are the best ones for the job. Perhaps unsurprisingly, Elon Musk is an outlier, having said several times that he doesn’t think lidar is necessary.

*This article comes to us courtesy of EVANNEX (which also makes aftermarket Tesla accessories). Authored by Charles Morris.

As a recent article in Futurism explains, several of the leaders in vehicle autonomy rely heavily on lidar (Waymo and Uber are currently embroiled in a legal battle over lidar trade secrets). However, Tesla’s Autopilot system is based on cameras that use optical recognition.


Cameras are discreetly hidden on the exterior accents of a Tesla (Image: Tesla)

“Once you solve cameras for vision, autonomy is solved; if you don’t solve vision, it’s not solved,” Musk said during a TED Talk in April 2017 (via Electrek). “You can absolutely be superhuman with just cameras.”

Just when Tesla seemed to be on the verge of a major breakthrough in autonomous driving, the company’s 2016 split with camera supplier Mobileye threw a wrench into the works. Tesla developed Autopilot 2.0 as a replacement for Mobileye’s computer vision technology, but by most accounts, it isn’t yet as capable as the previous Autopilot version. Tesla is working to bring Autopilot 2.0 up to speed as quickly as possible, but the slow pace of development has become a bone of contention between the company and owners who paid for an Enhanced Autopilot system that they say has yet to be truly enhanced.


Understanding Tesla’s Autopilot (Image: How it works)

Can Tesla deliver full self-driving capability without lidar? Experts in the field disagree. During Tesla’s recent quarterly earnings call, Musk discussed some of the technical issues involved, as reported by TechCrunch.

“We have to solve passive optical image recognition extremely well in order to be able to drive in any environment and in any conditions,” Musk said. “At the point where you’ve solved it really well, what is the point in having active optical, which means lidar? In my view, it’s a crutch… that will drive companies towards a hard corner that’s hard to get out of.”

Above: A look at how Tesla’s Autopilot works (Youtube: Wired)

The difference between radar and lidar has to do with the wavelength of the energy involved (radio waves vs light waves). Musk said that a system using the radar range would be better, because it could see through small obstructions. “[I find it] quite puzzling that companies would choose to do active photon generation in the wrong wavelength,” he said, adding that working with the laser spectrum is “expensive, ugly and unnecessary.”
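The wavelength gap Musk is pointing at is easy to quantify. A rough sketch (the 77 GHz radar band and 905 nm lidar laser are typical automotive values assumed here, not figures from the article); the roughly four-thousand-fold difference is why millimetre radar waves diffract around raindrops that readily scatter infrared light:

```python
# Rough comparison of typical automotive radar vs. lidar wavelengths.
C = 299_792_458              # speed of light, m/s

radar_freq_hz = 77e9         # 77 GHz: a common automotive radar band (assumed)
radar_wavelength_m = C / radar_freq_hz

lidar_wavelength_m = 905e-9  # 905 nm: a common near-infrared lidar laser (assumed)

ratio = radar_wavelength_m / lidar_wavelength_m
print(f"radar wavelength: {radar_wavelength_m * 1e3:.2f} mm")  # ~3.89 mm
print(f"lidar wavelength: {lidar_wavelength_m * 1e9:.0f} nm")
print(f"radar wavelength is ~{ratio:.0f}x longer")             # ~4300x
```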

Musk is convinced that the lidar-less road is the right one to take, but he did crack the door open to the possibility that he is mistaken. “Perhaps I’m wrong, in which case I’ll look like a fool, but I’m quite certain that I’m not.”


Written by: Charles Morris

*Editor’s Note: EVANNEX, which also sells aftermarket gear for Teslas, has kindly allowed us to share some of its content with our readers. Our thanks go out to EVANNEX. Check out the site here.


91 Comments on "Why Does Tesla CEO Elon Musk Disagree With Experts About Lidar?"


Also, if you solve the optical perception issue by pure AI power, the road to Tesla robots of all trades is open.

Lidar is expensive now, but eventually price will go down.

Current Lidar sensors are also bulky and don’t integrate well into a smooth shaped vehicle, but that will probably also change.

I think the other major weakness of lidar is that it is adversely affected by rain/snow conditions, so it's not really all-weather, whereas radar is all-weather.

And another issue with all these sensors is computer cost and power consumption, which is very large on the Waymo and other franken-autonomous vehicles, so perhaps here optical has a clear advantage.

Personally I think competition is good and may the best tech, or more likely a hybrid of various methods, win.

The necessity of lidar to get details of your surroundings is a myth. See, for example, the ORB-SLAM2 camera software, which my post below shows an example of.

The new Audi A8 and the upcoming A7 and A6 will have forward-facing lidar for level 3 autonomous driving. So these sensors cannot be that expensive anymore…

New solid state lidar sensors are, from reports, much cheaper than the older ones, by well over one order of magnitude. One company is actually claiming a 99% drop in price! Even if that’s an exaggeration, I think it’s clear that price is no longer a serious obstacle to using lidar scanners in cars.

Exactly – nor are the sensors as ugly as they used to be. In fact, you can’t even see them anymore.

“I think the major other weakness of Lidar is it adversely affected by rain/snow conditions so not really all weather, whereas Radar is all weather.”

All forms of EM detection are affected by rain and snow. Radar is less affected than either cameras or lidar, because it uses long wavelengths. But cameras use the same wavelengths as lidar, so claiming that cameras are “better” while denigrating lidar for wavelength issues is at best self-contradictory, and at worst exposes the whole argument as intentional bull pucky.

Anyway, the cheap radar units Tesla and other auto makers are using on their cars, low-resolution Doppler radar scanners, are utterly inadequate for building up a SLAM “picture” of the environment.

For that, you’d need high resolution radar scanners, which still don’t give as good a resolution as lidar, but at least would be far better than trying to use cameras for everything. Both lidar and radar scanners are active scanners, which are what self-driving cars need. Passive scanning with cameras just ain’t gonna cut it, which is why Waymo, Lyft, and other companies are using lidar.

Your image is worth a thousand words! Wavelength matters.

You don’t need millions of points, only thousands. It is not a human art picture we need but drive-able points. Your human side is misleading you vs. the technical aspects. Did you see the open source example ORB-SLAM2 that comma.ai and others use? Example video: http://www.youtube.com/watch?v=sr9H3ZsZCzc

That appears to be good enough for navigation, at least in the environments chosen… which may or may not be representative; there’s no way to tell just by looking at the video if it’s a cherry-picked environment.

It appears to me, though, that the SLAM shown isn’t sufficiently detailed to reliably spot smaller obstacles such as children and dogs in the road. There was not a single pedestrian in the entire video. It appeared to me that the smallest thing highlighted by the software was a bicycle rider.

Your argument is really weak.

A regular camera image gets affected by weather, but it’s still usable. That’s why humans are able to make do. LIDAR usually gives you a nearly useless image in bad weather, so is there really a point in using it?

LIDAR makes the best case easier and more robust, but does almost nothing for the bad cases. That’s what he means by getting stuck in a local minimum.

Even if he is wrong, it makes sense for at least one major company to take a LIDAR-less approach as opposed to everyone going down the same path.

Trying to depend primarily on camera images also fails to do something about the “bad cases” such as driving in the dark. That’s a much more commonplace event than driving in heavy rain or snow.

You are also presenting this as a binary, either/or situation. All three systems — visual sight, radar, and lidar — are degraded in heavy rain and snow. It’s true that lidar is degraded to a worse extent than radar, but — as I already said — that’s an argument in favor of high resolution radar, not an argument in favor of trying to use cameras for everything.

Tesla, so far as I can see, is trying to use cameras and low-resolution Doppler radar for everything. That just ain’t gonna work. If they were upgrading to high-resolution radar, then Elon would have a valid argument. But they are not, and he doesn’t.

Low-resolution radar gives a very fuzzy picture, as illustrated here:

I think one point that is missed here in the conversation is that human capability is not a guidepost for autonomous driving. Therefore the passive human visual system does not prove that a camera is sufficient.

Humans make mistakes that lead to numerous injuries, disabilities, and deaths. People will not accept an autonomous system that is as dangerous as a human driver.

So you better make sure that you get the state of the art sensors and processing power together to show clear superiority over a human driver. Anything less is not going to be acceptable for an activity where the consequences could be dire.

And you’re ignoring the vast difference in degradation between the different sensors.

Tesla is using radar point clouds. Are you reducing your argument now to high resolution radar?

Sorry, but I don’t think anyone on the planet has the image processing advancement foresight to draw the line at a particular level of radar resolution.

And I don’t see how you know what resolution Tesla’s radar is capable of, or what resolution they’d be willing to upgrade cars to for those that purchased self-drive.

Not sure if this is something to worry about, but what if, in the future, we have a six-lane road crammed full of AVs all simultaneously emitting lidar and radar signals? Wouldn’t that degrade these systems? A camera array does not emit anything. My logic says that AVs should have, and be able to rely on, all three systems and combine their data to improve overall performance. Once these systems are fully evolved and mass-produced, the price won’t be such an issue anymore. Furthermore, a messaging system between vehicles would extend the range even more; this would make chain collisions disappear into the history books.

A problem with vehicle-to-vehicle messaging is that it raises massive security problems. Once you have large numbers of systems talking to each other on the fly, the possibility of hacking in with false information rises exponentially. Malicious hackers could cause accidents or bring traffic to a halt. Secure autonomy will require isolating all vehicle systems.

If we can send bank transactions over the internet, this should also be possible. The messaging system can be secured to a certain level and sandboxed.
- Those messages should be signed and verified to begin with.
- Warning messages should be weighted by how many AVs report them.
- Warning messages should not trigger any direct actions like hitting the brake. They are warnings to alert the software to a potential threat and its possible location.
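Those three rules can be sketched in a few lines. This is a toy illustration only: it assumes a single shared fleet key and HMAC signatures, whereas a real V2V deployment would use per-vehicle certificates (e.g. the IEEE 1609.2 security stack); the key and message fields are made up.

```python
# Toy signed-warning scheme for V2V messages (illustrative assumptions:
# one shared fleet key, HMAC instead of per-vehicle certificates).
import hmac, hashlib, json

FLEET_KEY = b"demo-fleet-key"  # hypothetical key for illustration

def sign(msg: dict) -> dict:
    payload = json.dumps(msg, sort_keys=True).encode()
    tag = hmac.new(FLEET_KEY, payload, hashlib.sha256).hexdigest()
    return {"msg": msg, "sig": tag}

def verify(signed: dict) -> bool:
    payload = json.dumps(signed["msg"], sort_keys=True).encode()
    expected = hmac.new(FLEET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

def weight(warnings: list) -> int:
    # Weight a hazard by how many independently verified vehicles report it;
    # the planner treats this count as advisory input, never a brake command.
    return sum(1 for w in warnings if verify(w))

reports = [sign({"hazard": "debris", "lane": 2}) for _ in range(3)]
reports.append({"msg": {"hazard": "debris", "lane": 2}, "sig": "forged"})
print(weight(reports))  # forged report is ignored, so 3
```

The point of the third rule shows up in `weight`: verified reports only raise a confidence score, and nothing in the sketch ties that score directly to an actuator.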

That IS a great comparison photo. However, pictures are relatively meaningless to radar and lidar use. Radar and lidar primarily use the Z axis, or distance to the object, which pictures do not show. This comparison would be like taking a black and white photo and asking how well it differentiates a red car from a white one.

There is good reason aircraft controllers don’t look at a “picture” of radar out the window.

Radar doesn’t work in heavy rain either. I’ve had my Autopilot shut off mid-use due to radar vision being obstructed by rain.

Because there is software that proves that cameras can define thousands of Earth-Centered Earth-Fixed (ECEF) Cartesian coordinate points. You can use these for localization from HD maps or in real time. One example is the open source software called ORB-SLAM2 that comma.ai and others use. Example video: http://www.youtube.com/watch?v=sr9H3ZsZCzc
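The underlying geometry is simple: once the same feature is matched in two camera views, its depth follows from the stereo baseline. A minimal sketch of that principle (the focal length, baseline, and pixel coordinates below are made-up values, not ORB-SLAM2 internals):

```python
# Toy stereo triangulation: how two camera views yield a 3-D point,
# the principle behind feature-based camera systems like ORB-SLAM2.
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    # Standard pinhole stereo relation: Z = f * B / d
    return focal_px * baseline_m / disparity_px

# A feature seen at x=640 in the left image and x=620 in the right
# (disparity 20 px), with an assumed 1000 px focal length and 12 cm baseline:
z = depth_from_disparity(focal_px=1000.0, baseline_m=0.12, disparity_px=20.0)
print(f"{z:.1f} m")  # 6.0 m
```

Note the trade-off the thread is arguing about: the depth comes from matching pixels, so it is only as reliable as the feature matching, whereas lidar measures range directly.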

Anyone can easily convince themselves that today’s camera technology is not ready for autonomous cars to rely on for object detection and recognition by performing this experiment:
1) Get the video camera with the best DR (dynamic range = the range of contrast the camera can handle) you can find. Right now that is probably something like a $3500 Canon 5D MK IV digital SLR, since it can supposedly do HDR on video. HDR means combining multiple image captures taken at different exposures to improve overall DR. Most camera phones can do this *for stills*.
2) Have someone drive you around in very high contrast scenarios (sun + shady spots, sun + snow, night time) while you record video.
3) Play the video back. What you will see is that a lot of data is missing because the dynamic range of the camera didn’t allow it to be captured. No data = no object recognition. The best video DR right now is probably at best 16 stops, and that is from a digital SLR with a big sensor which requires big lenses. Judging by the lens aperture sizes Tesla is using, their sensors are probably closer to camera phone size which… Read more »

Or simply add in a sensor in a different spectrum, such as radar


Interesting thx.


When a good camera is doing HDR (i.e. multiple exposures), are they changing aperture or the digital equivalent of film speed, or both?


They are changing gain, AKA ISO, which is analogous to film speed. A camera phone can do this for stills. Doing it for motion requires lots of processing power.

Very simply, two video cameras with 12 stops each, one set for very bright scenes and one for shadows and night time, will give you the 20 stop DR of the human eye with a small overlap.
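The arithmetic behind that two-camera claim: stops are powers of two of the luminance ratio, so two staggered sensor ranges add, minus their overlap. A quick sketch (the 12-stop and 4-stop figures follow the comment above; treat them as illustrative, not measured values):

```python
# Dynamic range in "stops" is log2(max_luminance / min_luminance),
# so two staggered sensor ranges combine additively minus the overlap.
def combined_stops(stops_a: float, stops_b: float, overlap: float) -> float:
    # Assumes the two ranges actually overlap by `overlap` stops;
    # a gap between them would break the claim. Illustrative only.
    return stops_a + stops_b - overlap

# Two 12-stop cameras, one tuned for highlights and one for shadows,
# staggered with a 4-stop overlap:
print(combined_stops(12, 12, 4))       # 20.0 stops
print(2 ** combined_stops(12, 12, 4))  # ~1,000,000:1 luminance ratio covered
```

That is how the claimed ~20-stop range of the human eye could be matched without a single exotic sensor, at the cost of fusing two video streams in real time.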

Another thing to consider is that the human eye is the best detection device right now. But if the windshield wiper isn’t working (rain, snow, dirt, a spill), the human’s detection is of no use.

I don’t see any cleaning device for these cameras.

Also, consider how mischievous hackers could cause problems.

A person can easily tape over cameras to create trouble.

There would be no driver to remove it.

That is the most convincing argument I’ve heard so far. But what about a guy blind in one eye? Wouldn’t he have half as much of that ‘capacity to see’ you talk about? And why do you think this is a factor in driving autonomy? Honestly, I do agree with Musk, but neither I (nor, I risk saying, he) disagrees with most of what is said here. But I highly challenge the assumption that cameras cannot handle driving autonomy. That is simply not proven. What is proven is that full autonomy hasn’t been achieved despite the enormous array of sensors of all kinds available to man; to this day no software suite has been able to handle full autonomy, no matter what sensors it is equipped with. So if software is the missing link in full autonomy, why spend time messing with new sensor systems which we still haven’t tackled and have no indication of actually being proven useful? Why not spend developers’ time coding software, which is something we’ve fairly mastered, and which is clearly what is missing in the full-autonomy drive? Seems like a no-brainer really. That is not to say lidar is not an improvement over cameras or otherwise really… Read more »
Trying to develop a reliable self-driving system without first settling on what type of sensors the car will be using, would be like trying to build a house before putting in the foundation. Whatever sensors (or more likely, suite of sensors) will be used, the car will require a detailed, 3D SLAM virtual reality model of the environment around the car, complete with detection and accurate positioning of all obstacles of any real size, both stationary and moving. Obviously not every object can be detected, but then dime-sized objects don’t need to be detected. We can hope, though, that self-driving cars will be able to reliably detect objects such as a medium-sized dog, or perhaps down to the size of an adult cat. Trying to design software to move a car safely within the environment of the SLAM is pretty pointless if you don’t even know what kind of sensors will be used to build up the SLAM. Speaking as a programmer, I see a lot of very silly questions and arguments about self-driving cars. Ethical questions like “In case of impending accident, should the car drive itself into a solid brick wall rather than into a school bus full… Read more »
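The SLAM model described above can be caricatured as an occupancy grid: each active-scanner return marks a cell of the map as an obstacle. A minimal sketch with made-up beam data (this is a cartoon of the idea, not any production SLAM algorithm, and it skips the "localization" half entirely):

```python
import math

# Toy occupancy grid built from active-scanner (lidar/radar) range returns.
GRID = 21           # 21x21 cells
CELL = 0.5          # metres per cell
ORIGIN = GRID // 2  # sensor sits at the grid centre

def mark_hits(scan):
    """scan: iterable of (angle_rad, range_m) returns -> set of occupied cells."""
    occupied = set()
    for angle, rng in scan:
        cx = ORIGIN + int(round(rng * math.cos(angle) / CELL))
        cy = ORIGIN + int(round(rng * math.sin(angle) / CELL))
        if 0 <= cx < GRID and 0 <= cy < GRID:
            occupied.add((cx, cy))
    return occupied

# Fake scan: obstacles in a 3 m ring around the sensor, one beam per 10 degrees.
sample = [(math.radians(a), 3.0) for a in range(0, 360, 10)]
cells = mark_hits(sample)
print(len(cells), "occupied cells")
```

The grid resolution (`CELL`) is exactly the knob the comment is arguing about: half-metre cells can register a medium-sized dog, while a coarser grid would let it fall between beams.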

But, didn’t Elon say the New Roadster could ‘Fly?’ (Short Hops, at least!)

And the thing with the School Bus, School Zones, or Highways, Speed, or the Irreverent, or Illegal Applications of Speed, are the first element to manage!

Common software data set: ‘This Road, from here, to there, will frequently have Buses with Children at various points, stopped, or stopping suddenly; Maintain Distance, Don’t drive faster than you can stop in 1/2 your sight distance, Don’t drive faster than School Zone Speed near Schools, give an extra 50% of usual stopping distance, until Bus is clear! Be Aware, there are Brick Walls at 540 this street, this town!’

Such a description is way too simple, but presents my point, for basically “1 Line of Code” for Autopilot Software for the AI Engine to decipher, analyse, and totally manage, while also managing a few thousand other “Lines of AP Code!”

Because he is pig headed and doesn’t want to have to retrofit all the cars that were purchased with FSD with lidar units??

Seems like a reasonable answer/excuse.

Musk is not an expert on the topic.

Maybe Musk is not alone in Tesla R&D department…

Furthermore, no animal on Earth has lidar, and they all find their way: even birds moving through a 3D environment at much higher speed than humans, even insects with quite small brains. So it may not be easy, but it may be possible, and quite inexpensive in hardware!

The goal of self-driving technology isn’t to reproduce the ability of animals (including humans) to see. The goal is to produce robot drivers which are considerably safer than human drivers.

The first goal of those designing a truly self-driving car should be to build up in the car’s computer a SLAM virtual reality simulation of the car’s environment, so it can reliably detect obstacles (both moving and stationary). Using lidar to do that is far better, far faster, and far more reliable than trying to use cameras.

Again, trying to use cameras for that is like trying to use a hammer when you need a screwdriver.

I wouldn’t go that far. But overall I don’t think going the camera approach is bad regardless of it is right or wrong. Because at end of the day I don’t think any “single” company will accomplish fully self driving.

I think self driving will come when companies decide to pool their technologies together. So Tesla will have the best Camera based tech, Google might have the best Lidar based tech. And when all that technology is pulled together, you will get fully level 5 self driving cars.

Because self driving is not simply about having 100% accuracy, it is about having 500% accuracy. you need as much redundancy as possible which I think will include:
1) Multiple redundant sensors on cars
2) V2V communication
3) V2I communication

And a self driving car should be able to operate in full self driving mode with any one of those 3. But all 3 would be available at once.

Not so fast: a human driver is more than a set of sensors. It is sensors plus a brain able to make sense of it all. So, even if the sensor part is done, there will still remain the brain part. In other words, having a true AI.
But of course having a true AI will pose a whole more questions than just using it to drive a car.

I’m sure that in new cars in 2-3-4 years time, they will feature a combination of cameras, lidar and radar. Even at Tesla.
It is a natural thing.
We see many work on cheap small sensors. Smart software will use them all, and use the best in each situation. The software will compensate for defective or dirty sensors and so on.
The larger lidar sensors will probably still have a market in buses, and special vehicles.

I’m here basically saying what is already said in the above posted article… just adding in some technical detail: Elon will be proven correct about LIDAR. Because… LIDAR is very useful for precise indoor navigation, precise outdoor navigation in optically clear/clean conditions, high altitude target guidance, and SpaceX docking to the International Space Station. Recent advancements in LIDAR algorithms do allow some filtering of noise from fog, rain, & snow conditions, but thus far this has proven to be of limited use, requiring the read target to be near the LIDAR sensor… the further out the target, the higher the noise fill-in… sort of like running between rain drops in a straight line… the longer the run, the less chance of having a clear run path… thus the further the target from the LIDAR sensor, the lower the confidence of the return data. That’s why LIDAR as the *primary* sensory mechanism for a car autopilot requires a reliable backup sensor system when the car enters an adverse weather environment. What Musk is saying (and what most fail to grasp) is that if a LIDAR backup system is anyway ultimately required for full autonomy in a wide band of weather conditions then… Read more »

In fact, I would say that Tesla needs to add 2 more radars and then they are good.
Basically, they need better vision at 90° to the side, sweeping forward to about 10-20 degrees. And these, like the front radar, need to see far. The reason is for dealing with side streets.

Seeing down Side Streets, AND Avoiding getting T-Boned!

Seeing mostly what is in front of the vehicle can, at best, deal with 25-35% of actual Accident situations, since things happen from behind, like in a rear ender; or the Sides of the road, like Construction Areas; and from the Mid-Cardinal Points halfway between In Front and at 90° to the side, and the same points between the 90° sides and the rear!

If Autopilot only Avoids CAUSING an Incident or Collision, it is only half way there!

The 2nd half of safe driving, is avoiding getting Rear Ended, T-Boned, and Nerfed – as in some other spinning car, hitting you and spinning you, like getting a Cop doing a ‘PIT Maneuver’ on you! (PIT: Precision Immobilization Technique)
That also might be from a trailer your own vehicle is towing, too!

I remember a video from Bjorn where he created a wall of white blocks (probably polystyrene or something) to demonstrate Tesla’s safety system.

The Model S was supposed to stop; instead it drove right through the blocks (with no damage, they looked very light). The video has since been taken down.

I guess the same feature that allows it to see through fog and so on caused it to miss a physical object.

The problem was that the emergency braking system which Bjørn Nyland was relying on to detect the wall of thin styrofoam sheets (not blocks) depends on low-resolution radar for detection. Radar is good at detecting many things, but not good at detecting very low-density material composed mostly of air. Radar waves pass right through that kind of stuff without reflecting.


Best practice would be for a self-driving car to be equipped with both lidar and high-resolution radar scanners. When it comes to saving human lives, a belt-and-suspenders type redundancy is arguably worth the extra expense.

Elon is just being stubborn about prioritizing saving money over making self-driving cars safer. In the long run, that’s going to lose in competition with better-equipped self-driving cars.

It’s very strange that Elon can’t seem to grasp that rather straightforward reality.

“What Musk is saying (and what most fail to grasp) is that if a LIDAR backup system is anyway ultimately required for full autonomy in a wide band of weather conditions then it’s best to improve that secondary system and do away with LIDAR.”

That may well be a good argument for using high-resolution radar instead of lidar, but it’s certainly not an argument in favor of cameras which use the same EM wavelength as lidar!

It also ignores the need for good lighting for cameras. Both lidar and high-resolution radar will work as well at night as in daylight.

If you can see, a passive camera should be able to see.

That’s not the case for lidar since it’s active.


Cars sold with LIDAR by Waymo, zero. Cars sold with LIDAR by Uber, zero. Cars sold with autopilot by Tesla, 100,000. The leader is who?

When LIDAR cost is low enough to be sold to the public, and the system’s installation is seamless enough not to kill highway range, Tesla will sell it too.

That’s adequate* for the limited driver assist features in current cars. It will be completely inadequate for Level 4 or Level 5, fully automated cars.

*That is, it’s adequate if you ignore the fact that the one-and-only verified case of fatality in a car operated under Tesla Autopilot + Autosteer was a case of (according to Tesla) the car mistaking the side of a semi trailer painted white with “a brightly lit sky”. We can be sure that if Autopilot was relying on active scanning using lidar (or high-resolution radar) rather than passive scanning using cameras, it wouldn’t have confused those things!

No, the fatality was due to the driver not watching the road. Why would anyone watching the road and seeing the trailer directly ahead not apply the brakes?

I agree, but we are talking here about fully autonomous cars, when there won’t be a human driver at all. There will be just passengers in the car.

That’s what Tesla (and Waymo and other companies) are working toward, and whatever sensors are installed in the car have to be adequate to support that level of autonomy.

Remember: this was before they repurposed their radar to act more like a LIDAR system. It was also Mobileye’s system, which they said Tesla pushed right to the edge of its capabilities!

Mobileye’s own next system was to go to an 8-camera setup, but they split from Tesla, and Elon did not yet have his replacement system fully up to speed at that time, but needed to ship!

Lidar is not widely available, but Autopilot is, and it improved driver safety by 40%. When LIDAR pricing, packaging and chip computing speeds get to a point where it is commercially viable in a privately owned vehicle, then Tesla will likely install them as well. Lidar isn’t proprietary and will be used by every manufacturer if/once proven best.

Just because Waymo drives around a parking lot a bunch of times doesn’t make them the leader in the industry, they need to sell the tech for that to happen.

Waymo plans on deploying a fleet of fully self-driving taxis in a suburb of Phoenix this year.

Dissing Waymo by suggesting their test vehicles don’t venture onto public streets, only shows how little you know about what Google and Waymo have accomplished.


Did you even read that article? It says that after nine years they are itching to get started.

4 million miles from 600 test taxis comes to 6666 miles per vehicle which could be covered in roughly 2 months of service. For a company with all the money in the world it sounds like they are just getting off the ground.

“Cars sold with LIDAR by Waymo, zero. Cars sold with LIDAR by Uber, zero. Cars sold with autopilot by Tesla, 100,000. The leader is who?”

And which one is actually working? (hint: not Tesla)

I still don’t see the attraction of autonomous vehicles, especially on a car I’m buying.

Maybe I just like to be in control for myself? AP seems more like a gadget.

You are missing the point. It makes the drive more relaxing. Think stop-n-go traffic on commutes, roadtrips, etc. I have 25K miles using AutoPilot across a variety of these conditions. Other drivers who have joined me on some of them talk about staying engaged but feeling more relaxed and less weary at the end of their drive.

You go drinking at a bar? AP.
You are above age 60 and having issues, such as no license? AP.
You have no license because you were drinking at a bar? AP.
You are going on a long drive across the nation with your family? AP.
You are a company that wants to deliver goods from one spot to another? AP.

You have to keep your hands on the steering wheel and stay alert at all times.

Not when we get to Level 4 or better autonomous cars. You’ll be able to sleep in the car, or read, or do paperwork, while the car drives itself.

You might if you were blind.

We saw Tesla demonstrate an autonomous drive recently, in that video they showed what the system was “seeing”. Maybe it was doctored to show the best possible run, or maybe it was genuine, but if it was genuine wouldn’t it be interesting for Tesla to include a mode where this system view could be loaded onto the centre display so you could see it while driving?
Waymo also has a video and gives a great explanation of how this vision gives the occupant confidence in the system. Surely, if Tesla’s system is as good as we would like to believe, then this mode would give greater confidence to the owners, and also be an indication of how much progress Tesla is making.
Even showing the supposed shadow mode would be very interesting. Did the system “see” all the things we think it should have seen?
That’s the tech output I’d like to see in a Tesla, something tangible and should be pretty simple for them.

They aren’t doctored; they just require additional post-processing before the augmented view is available for human consumption. Basically, both of these videos required editing and could not display the visuals live. It’s possible that adding the graphics/overlays is all that needs to be done.

However, I think it’s more likely that the algorithms don’t directly relate to visuals (all the time) but instead to probabilities: which direction is the safest to drive in or veer towards, what speeds fit within a safety threshold, and so on. Essentially the car is dealing with fewer individual pieces of information and more aggregate information.

“We have to solve passive optical image recognition extremely well in order to be able to drive in any environment and in any conditions,” Musk said. “At the point where you’ve solved it really well, what is the point in having active optical, which means lidar? In my view, it’s a crutch… that will drive companies towards a hard corner that’s hard to get out of.”

This attitude appears to be what is described as “If all you have is a hammer, then everything looks like a nail.”

Sure, there are certainly things (like reading road signs) that you do need cameras for. But the simplest and best way to build up a SLAM* system is using lidar, not cameras. (*SLAM stands for Simultaneous Localization And Mapping technology, a process whereby a robot or a device can create a map of its surroundings, and orient itself properly within this map in real time.)

Using cameras coupled with optical image recognition software to build up a virtual reality 3D “picture” of the environment is far less reliable, far slower, and requires far more computer processing than using lidar. Cameras also have the problem that, like the human eye, they can’t “see”… Read more »

I disagree. That attitude is more like: if you’re assembling a dresser, what’s the difference in using hand tools vs. power tools? They’ll both usually get the job done, but power tools cost a lot more, a power tool is a package item that does multiple things in one, and it makes individual parts of the job quicker. One caveat of power tools is that they’re bulky, and you will need to fall back to a hand tool if a scenario warrants it.

However, it still takes time to assemble such a large piece of furniture no matter how fast the drill is. Plus it’s way pricier. So is there really any benefit if you’re going to require every dresser to be assembled by end users with power tools?

You have completely missed the point.

As you say, you can assemble furniture equally well using hand tools or power tools. Hand tools are not inferior for that purpose, other than speed.

The same cannot be said for trying to use cameras for passive scanning, coupled with inherently unreliable optical object recognition, versus using active scanning with either lidar or high-resolution radar.

Slower processing doesn’t merely mean you need multiple expensive microprocessors to make up for the difference in speed. Slower processing means, for example, a longer reaction time by the software to reach a conclusion, which means less time for the car to react in an emergency.

And you’re ignoring other very real ways in which active scanning is better than passive scanning. Here is just one very clear-cut example: No matter how speedy or slow your computer is at running optical object recognition software, cameras cannot see in the dark any better than your eyes can. Active scanning with either lidar or high-resolution radar is not affected by whether it’s day or night outside.

Just because these are challenges doesn’t mean they’re unsolvable.

Extra processing power – lol. Add in more powerful processors? This is already being done. The same goes for slower reaction times; you use better processors so that you don’t have slower reaction times.

Seeing in the dark – Add headlights to every vehicle and make signs reflective. Wait, every car, including the car itself, is already equipped with headlights. And people don’t have problems seeing in the dark while driving. So why are you even bringing this up?

Being less reliable – That’s not an inherent problem whatsoever. That’s a problem with implementation. And as explicitly stated, this is something Tesla is addressing. Just because image recognition in the past hasn’t always been correctly implemented doesn’t mean all image recognition is unreliable.

“Add headlights to every vehicle and make signs reflective.”

Yes, and paint deer and moose with reflective paint!

“Being less reliable – That’s not an inherent problem whatsoever.”

Regulators may feel differently…..

Lol ok

“Just because image recognition in the past hasn’t always been correctly implemented doesn’t mean all image recognition is unreliable.” Lots of people claim that the human eye is sufficient for driving, and assert that therefore cameras should be sufficient. What they are ignoring is that it’s not so much the human eye that is superior; it’s the visual processing cortex in the human brain which is superior. Reproducing that using computers and software is going to be an extremely difficult problem to solve, and it almost certainly won’t be solved using current microprocessors and computers. It’s a problem that won’t be solved in the next five years. If and when it is solved, it will almost certainly be by research teams specializing in robotics and computer development. They have been working towards that goal for decades, with only limited success. This is a problem which will not be solved by Tesla or any other automaker. I’m a strong fan of Tesla, but I know enough about programming and the limitations of optical object recognition software to understand that this is not a problem Tesla can solve on its own. No matter how much you work with that hammer, it’s never…

P-P, as an aside: “No matter how much you work with that hammer, it’s never going to work as a screwdriver.” – Some Construction Workers would disagree with that!?

“cameras cannot see in the dark any better than your eyes can.”

That depends on the camera, but even if it were true, here in California cars are equipped with lights, so we can still drive at night.

Self-driving cars need to be able to “see” in all directions, not just the narrow forward arc illuminated by headlights.

Ever hear of a T-bone collision? Ever hear of a deer running out onto the road at night? I dunno about you, but I’d like any self-driving car in which I was a passenger to be designed to avoid such things.

There is also the question of whether or not headlights actually provide sufficient illumination. According to some studies, they don’t light up the road far enough ahead for driving at freeway speed (see link below). A shorter seeing distance means less time available to react and avoid an accident. That’s a limitation which applies equally to self-driving cars. Computers and software may react far faster than humans, at least in ideal conditions, but inertia affects self-driving cars every bit as much as human-driven ones. Tires don’t grip the road any better, and brakes don’t work any faster, just because the car has a robot driver rather than a human one.
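The inertia point above is easy to check with textbook stopping-distance arithmetic: even granting the robot a near-instant reaction, the friction-limited braking distance d = v²/(2·μ·g) is fixed by physics. The numbers below (dry-asphalt friction, freeway speed) are illustrative assumptions, not figures from any study.

```python
G = 9.81   # gravitational acceleration, m/s^2
MU = 0.7   # assumed tire-road friction coefficient (dry asphalt)

def stopping_distance_m(speed_kph, reaction_s):
    """Total distance to stop: reaction distance plus braking distance."""
    v = speed_kph / 3.6                # km/h -> m/s
    d_react = v * reaction_s           # distance covered before braking starts
    d_brake = v**2 / (2 * MU * G)      # friction-limited braking distance
    return d_react + d_brake

# Even a robot driver with a 0.1 s reaction at 110 km/h needs roughly 71 m,
# which can exceed the distance low-beam headlights usefully illuminate.
```

On these assumptions, no amount of processing speed closes the gap between seeing distance and stopping distance; only slowing down does.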


Another Euro point of view

So you seem skeptical that the technical approach Musk is taking to autonomous driving will give good results, because you have good knowledge of this subject. Being a certified accountant able to read financial reports, I am skeptical (understatement…) when the gross margins Tesla reports are compared to the rest of the industry, since Tesla does not include R&D and distribution costs in them as the rest of the car industry does. My skepticism is FUD but not yours? How come?!

In short, because my skepticism is honest, and yours isn’t. That is obvious, because I praise Tesla when I think they deserve it, but criticize Tesla in cases such as this, when I think they’re headed in the wrong direction.

Contrariwise, you rarely if ever praise Tesla, or even post a neutral opinion.

My skepticism regarding Elon’s claims on this narrow subject is a result of my understanding of physics and science, while your “skepticism” is merely a symptom of your Tesla-bashing agenda.

So you can stop pretending to believe your intellectually dishonest Tesla bashing has a value equal to anyone’s honest opinions. I’m sure you’re not actually clueless enough to believe that.

“If all you have is a hammer, then everything looks like a nail.”

You nailed it! 🙂

Usable lidar is too expensive and bulky for consumer cars today. Musk can either:

1. Wait for lidar like everyone else
2. Claim lidar is unnecessary and sell pipe dreams

Camera vs. lidar is a false choice. Waymo, Cruise, and others use cameras PLUS lidar (plus radar, etc.). Their sensor fusion software uses all sensor data AND sensor confidence levels to create a 3D world view. If one sensor’s confidence level is low, e.g. due to bad weather or sensor failure, the system de-emphasizes that sensor when creating the world view.

Degraded confidence causes the system to operate more conservatively, e.g. reduce speed and increase following distance. Humans do the same thing (well, some of us do).

Tesla’s FSD system may one day be as good as one of Waymo’s degraded modes. Probably not, though. They’re way (mo) behind and trying to solve a more difficult problem. It’s really a Hail Mary pass at this point.
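The confidence-weighted fusion and graceful degradation described in the comment above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not any vendor’s actual pipeline; the function names, the weighted-average rule, and the speed-cap policy are all assumptions made for the example.

```python
def fuse(readings):
    """Fuse per-sensor estimates of the same quantity (e.g. distance to an
    obstacle, in meters). readings: list of (estimate, confidence) pairs,
    confidence in [0, 1]. Returns (fused_estimate, overall_confidence)."""
    total_conf = sum(c for _, c in readings)
    if total_conf == 0:
        return None, 0.0  # no usable sensor data at all
    # Confidence-weighted average: low-confidence sensors are de-emphasized.
    fused = sum(d * c for d, c in readings) / total_conf
    # Overall confidence driven by the best single sensor still working.
    overall = max(c for _, c in readings)
    return fused, overall

def speed_cap_kph(overall_conf, base_kph=100):
    """Degrade gracefully: lower confidence -> lower allowed speed."""
    return base_kph * max(0.3, overall_conf)

# Camera degraded by fog (0.2), lidar strong (0.9), radar moderate (0.6):
dist, conf = fuse([(52.0, 0.2), (50.0, 0.9), (51.0, 0.6)])
```

Here the fused distance sits close to the trusted lidar reading, and the 0.9 overall confidence leaves the speed cap near its base value; drop all confidences and the cap falls toward its conservative floor.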

“Degraded confidence causes the system to operate more conservatively, e.g. reduce speed and increase following distance. Humans do the same thing (well, some of us do).”

Thank you, that’s a point I should have made too.

In conditions of heavy rain, snow, or fog, human vision is degraded, with sight limited by distance. Visible-light cameras have the same limitation. The only safe way to drive in such conditions is to slow down significantly. What Musk is saying, by asserting we should accept degraded sensors at all times, is logically equivalent to saying that self-driving cars should drive slowly at all times so that it won’t matter if their sensors are significantly degraded in heavy rain, fog, or snow.

This is not a rational approach to designing self-driving cars.

We can accomplish more if our toolbox has more than just a hammer in it. Adding a couple of different screwdrivers lets us accomplish more.

I agree with you there. I think the problem *could* be solved with just cameras – and humans are proof that, in general, it is solvable with just cameras.

But why would you restrict yourself to cameras in the first iteration of an autonomous driving system? In the first system, you take the *easiest* route, and this means more sensor hardware.

Seriously, for a self-driving car, who cares if it costs a couple of thousand dollars more? If it costs $10K more, that is three months’ cost of hiring a driver. Except that the system will be usable for *years*.

Musk knew that in the lidar autonomous driving game he had already lost and couldn’t bring anything to market in the short run. And he doesn’t have the stamina to develop this with a 10 year timeline. He needed something to sell *yesterday* and offer it to his customers to show the technological superiority.

Musk’s “First Principles” approach will surely work… if the software can be written.
Please don’t miss that there is another entry in the forward sensor field: Imaging LADAR.
This page has two Imaging LADAR videos of the same drive made over a decade ago in Santa Barbara. Each pixel of the camera is a LADAR receiver, so each pixel carries distance information. One view is exactly what the camera was seeing head-on: red is close, green is distant. The other is a computationally-generated overhead offset view to highlight the depth of the data set. I suspect someone is looking closely at this technology.
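The per-pixel ranging idea in the comment above reduces to the time-of-flight relation d = c·t/2 applied at every pixel, with the near/far coloring just a threshold on distance. A minimal sketch, assuming illustrative function names and a 30 m “near” cutoff (nothing here comes from the actual Santa Barbara system):

```python
C = 299_792_458.0  # speed of light, m/s

def pixel_distance_m(round_trip_ns):
    """Distance to the surface seen by one pixel, from the measured
    round-trip time of the laser pulse: d = c * t / 2."""
    return C * (round_trip_ns * 1e-9) / 2.0

def false_color(distance_m, near_m=30.0):
    """Red = close, green = distant, matching the video's color scheme."""
    return "red" if distance_m < near_m else "green"

# A 200 ns round trip corresponds to roughly 30 m of range.
```

Running this per pixel over the whole array yields a depth image directly, which is why no scanning mechanism is needed.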


LADAR is just another term for lidar.

I think what you are trying to talk about is infrared lidar. While that does use somewhat longer wavelengths than visible light lidar, the wavelengths are still shorter than radar.

But despite what Musk says, longer wavelengths are not entirely good. While it’s true that longer wavelengths can better penetrate rain, snow, and fog, it’s also true that they give you a more fuzzy, more blurred, image of the environment. Blur it too much and you lose the ability to see the edges of objects, and you lose the ability to see small objects.

Infrared lidar is an interesting tech, but requires the sensor to be cooled below the ambient temperature of the environment. If the goal is to put multiple cheap, small solid-state lidar units into cars pointed in different directions, then the need for cooling would make things difficult because it would both drive up the price and make the units bulkier. However, that might be worth it, if that would allow lidar to “see” thru fog, as well as not being degraded as much by rain and snow.

Thanks for the response. The particular sensor used in the demonstrated imaging LADAR is an uncooled InGaAs array. The temperature of the array IS stabilized, however, with a simple TC cooler at about 10C, I believe. Working in the SWIR band, these cameras see well through most fog and dust conditions. A key advantage is that SWIR light passes through conventional glass, meaning the camera could be mounted behind the swept windscreen. And in the SWIR band, reflected light is the primary source of information, with little thermal information in that band.

As long as you don’t have a brain for interpretation that is superhuman, camera vision will not be superhuman, and you need much better sensors instead. This article reads like it wants to say radar = lidar with just another wavelength, which is stupid and wrong. Radar photons? Really? I hope he has not said that… Tesla’s hardware will not be good enough for Level 5. Accept it.

What’s wrong with radar photons? Are you sure you understand this?

“…radar photons? Really?”

Yes, really. All types (and all bandwidths) of EM (electromagnetic) energy are comprised of photons. Radar and lidar use EM for sensing; radar uses radio waves (comprised of photons) and lidar uses either visible light or infrared waves.

/basic science

Yes, you can model any electromagnetic energy as particles in matter-free space, and you can call them photons, but that does not mean it is smart to do so, as the observed effects differ greatly between light and radio waves. Light shows the behavior of both waves and particles. Tell me where radio waves show the behavior of particles.

It is like saying orange juice and coke are basically the same: a solution of carbohydrates in a low-pH, water-based fluid. Yes, but what’s the point, when they taste completely different?

Why not use both lidar and radar?

If cameras are really that bad, then why is an autonomous 4k camera drone using only cameras to navigate through trees with no problems? The darn thing can even avoid twigs coming out of the trees. It also anticipates where the subject being filmed is going to move next. All that power fits into a drone you can hold in one hand.
Skydio drones are even using the same chipsets that are in Teslas now. So this idea of needing lidar is just bunk. Radar and cameras are more than enough to navigate a car through traffic better than a human can.

Lower stakes with a drone crash, and it doesn’t fly in adverse weather conditions.

I watched the video, and while it is impressive, it was shot in a low-contrast scenario which doesn’t require much DR (see my earlier response) and doesn’t reflect the environment we often drive in. While, as one poster mentioned in their reply to my DR post, one can use two cameras to cover a wider DR, AFAIK Tesla isn’t pairing up cameras today. Don’t get me wrong, I think someday cameras might work as primary sensors, but that day isn’t here yet. I certainly look forward to autonomous cars, but I think it will be the generation of cars after the used 2019–2020 Model 3 I buy in 2023–2024. This is a great thread, BTW. I have learned a lot.

I think one point that is missed in this conversation is that human capability is not the guidepost for autonomous driving. Therefore the passive human visual system does not prove that a camera is sufficient.

Humans make mistakes that lead to numerous injuries, disabilities, and deaths. People will not accept an autonomous system that is as dangerous as a human driver.

So you had better make sure you bring state-of-the-art sensors and processing power together to show clear superiority over a human driver. Anything less is not going to be acceptable for an activity where the consequences could be dire.

I try to avoid squirrels.

If it never rained, and there were never fog or snow… then autonomous driving would work with sensors as limited as radar or lidar alone.
Only a combination of different sensors will be the key to letting the car “see.”

P.S. Just read the driver’s manual (e.g. Tesla’s) and don’t only listen to the PR messages: “The view from the radar sensor or camera(s) is obstructed. This could be caused by dirt, mud, ice, snow, fog, etc.”