Tesla Says It Recreated Its Own Version Of Mobileye’s Autopilot Tech In Just 6 Months



Tesla Autopilot

In just six months, Tesla recreated its own proprietary version of the Mobileye Autopilot technology.

It took Mobileye years and millions of dollars to create the chip that helped power Tesla’s Autopilot system. In September of last year, the two companies parted ways over disagreements about the technology’s development timeline (the split was actually pretty messy). Tesla had to rebuild the system on its own, from the ground up. Tesla’s Autopilot 2.0 system uses eight cameras, 12 ultrasonic sensors, and forward-facing radar.

Autopilot 2.0

For a period of time, owners of cars with second-generation Autopilot had to wait for the system to reach parity with the original version of Autopilot. That parity has arrived gradually, through incremental over-the-air software updates. Musk’s announcement on the Q1 earnings call that Tesla completely recreated the technology helps make sense of what has been happening all along.

As much as Tesla surely would have preferred to get the system up and running at its full potential as soon as possible, safety was of the utmost importance. Rushing the development of the system could have had dire consequences. While waiting six months or so likely seemed like forever to owners of second-generation Autopilot vehicles, six months is impressive in retrospect.

Musk is still confident that a Tesla vehicle will be able to traverse the U.S. without any driver intervention before the end of the year. He shared:

“November or December of this year, we should be able to go from a parking lot in California to a parking lot in New York, no controls touched at any point during the entire journey.”

Aside from the Boring Company conversation, Musk spoke about the Autopilot system at the recent TED 2017 conference in Vancouver. He said (via Teslarati):

“Once you solve cameras for vision, autonomy is solved; if you don’t solve vision, it’s not solved … You can absolutely be superhuman with just cameras.”

Mobileye was recently acquired by Intel for $15.3 billion.

Source: Teslarati

Categories: Tesla

61 Comments on "Tesla Says It Recreated Its Own Version Of Mobileye’s Autopilot Tech In Just 6 Months"

Hindsight is 20/20.

What about hindsonar?

If you don’t want to DRIVE a car, then don’t buy one, get a chauffeur instead! At least you’ll be creating a job for someone.

Autopilot is not forced on you; you can still drive yourself.

It’s nice to have the option to drive, with the safety backup of Autopilot features like emergency braking.

Exactly. And while I love to drive when there are opportunities to have fun or to show gas car drivers how disadvantaged they are in their museum pieces, if it’s stop-and-go I kick in Autopilot and relax. Same on a bridge with strong sidewinds: why struggle when the machine does it perfectly? Free path on a fun and curvy road? I take over again. I do notice that it teaches me to leave more space in front of me; I’m pretty certain it has made my driving safer and more fun.

AutoPilot is increasingly forced upon us, in my mind, because:
-you want to reach for controls without looking, and you can’t
-you want driving info at a glance
-you might get into an accident because of either of the above, quickly followed by another “Humans aren’t safe” statistic.

If one wants a compelling electric car, they may not like the above, and feel “forced” to give up what they can otherwise get elsewhere (that includes a premium coil suspension). Do they have to buy it? No. But the point is Tesla keeps electric drive hostage to its image of a car. They get to dial this into a box because there’s enough demand, and nobody else offers a compelling EV.

I’m pretty sure people have an image of what a Tesla is, and autonomous driving is part of that image. With the exception of range, there are plenty of EV choices and more on the way, including range.

Besides, if you don’t want autonomous driving, don’t pay that optional price, the car will drive just fine without it.

Oh, and there are plenty of cars that don’t have all the traditional buttons and switches; the Toyota Corolla is part way there. Personally, I don’t like the lack of tactile feedback with touch screens, and the Corolla has very poor hit spots and detection, so you often have to repeat the action. Get that right at least and I’m sure it would be much better.

In a recent video it looks like the driver waves their hand in the Model 3 and the wipers go, so maybe Model 3 will have a trick or two that we are not expecting.

Emergency braking? If you aren’t paying attention while driving, then you shouldn’t have a driver’s license or be on the road.


This is like saying if I don’t want to cut my OWN hair, then I shouldn’t have any.

(Not to mention that people who have chauffeurs to drive them also often own the car!)

You just described SkinHead Uber Drivers… 😉

I thought they hadn’t released high-speed emergency braking yet.

“Once you solve cameras for vision, autonomy is solved; if you don’t solve vision, it’s not solved … You can absolutely be superhuman with just cameras.”


I was wondering about this assertion too. I guess the idea is that if you are relying on Lidar, you can’t read the road signs etc. that are an essential part of driving. This seems overstated; no reason Lidar can’t support a visually-based system.

You guys are dopey.

By “superhuman”, I think Musk is saying that, just like with everything else, it is possible for computers to “see” better than humans, yes, even just using cameras.

He might also be implying that the computer will be able to react more quickly to what it sees. The computer also won’t be affected by being sleepy, having misplaced its glasses, or having poor depth perception (in fact, a computer could potentially see with both horizontal and vertical depth perception, something no human can do without tilting their entire head).

Let’s also not forget that cameras can be much more capable than human eyes (e.g., see in very low light, avoid being blinded by bright light, and see infrared and other kinds of light that humans can’t).

Lidar is absolutely essential, Tesla is pretending.

I don’t believe Tesla’s pretending in the least. I think he’s aiming to prove it sometime this year. Being “superhuman” with just cameras, radar and sensors shouldn’t be too difficult, considering how we humans drive just using our limited senses. The trick is going to be consistency and how to handle the unexpected.

Lidar is much more precise and quicker; cameras cannot do it all. When you don’t have the tech, you claim you don’t need it.

In much the same way that some people, if they lack understanding of a concept, will make sweeping (and blatantly incorrect) statements criticising those with greater abilities.

I understand LIDAR, do you?
I also understand why projects use it.

You drive with a single, forward facing binocular camera with limited depth perception, about 2 degrees of useful detailed vision, and some interesting non-linear light collection characteristics. Your cameras are not particularly advanced, and, in fact, were originally intended to operate under water, and still haven’t fully adapted to use in air after 500 million years of trying.

You drive moderately well even with this massive vision deficit because you have some decent learning software running in your brain.

Why do you view self driving cars any differently? 8 cameras around a vehicle are already better than human vision, by quite some margin. If you want to add in LIDAR, RADAR, or thermal imaging as additional safety features then great… but they’re certainly not required. This is a software problem, not a sensor problem.

Notice that humans are able to drive cars without LIDAR augmentation. I guess humans are a poor benchmark, since they crash so much.

Sometimes humans crash because they didn’t see well, or could not react quickly enough to what they did see. Both of these are areas where potentially a computer could excel and do much better than even the best human driver could.

Most often though, humans probably crash because they are doing something *other* than “seeing” and “reacting”. On the other hand, the computer is always seeing, always driving.

I am always seeing while driving. Computers are inferior to me.

You can split hairs and argue that LIDAR can measure distance more accurately than binocular vision, but what really counts is “good enough”. If binocular vision can resolve to about 6″ and LIDAR to 1/4″ who cares? To say that LIDAR is essential is obviously false as Tesla already has cars driving without it and humans have been driving without LIDAR for over 100 years.
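To put rough numbers on that “good enough” point: stereo (binocular) depth error grows with the square of distance, while lidar ranging error stays roughly constant. A minimal sketch, with purely illustrative assumed numbers (the baseline, focal length, disparity error, and lidar error below are not the specs of any real system):

```python
# Back-of-envelope: stereo-camera depth uncertainty grows with the square
# of range, while lidar error is roughly constant. All numbers here are
# illustrative assumptions, not measurements of any real sensor.

def stereo_depth_error(z_m, baseline_m=0.3, focal_px=1000, disparity_err_px=0.25):
    """Approximate stereo depth uncertainty (meters) at range z_m."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

LIDAR_ERR_M = 0.02  # assumed few-centimeter lidar ranging error, flat with range

for z in (10, 30, 60, 100):
    print(f"{z:>4} m: stereo ±{stereo_depth_error(z):.2f} m, lidar ±{LIDAR_ERR_M:.2f} m")
```

With these assumptions, stereo is within a few inches at 10 m but off by several meters at 100 m, so whether cameras are “good enough” depends heavily on the range at which decisions must be made.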

When you drive to work, what part of you does LIDAR? None. That’s what Musk is saying: the road system is visually based for humans, so it can be solved visually. You also have only two cameras, both facing the same direction, that are easily distracted and have only a limited range of movement.

The LIDAR debate is interesting and an important concept in autonomous driving, but each side tends to ignore the other’s strengths. LIDAR certainly IS simply much better awareness, and on its own is so superior as to be a no-brainer.

But that leaves out the little factor of cost, which is more important than you think. LIDAR is crushingly expensive right now, and is not even close to cheap enough to put on hundreds of thousands of cars. This is crucial, as Tesla feels that VOLUME of data is more important than LIDAR. Tesla’s cameras have over a billion miles of real world driving data, including thousands of “rare” and dangerous/evasive driving events that are difficult to plan for. These rare events are the reason that autonomous driving is so hard.

You just can’t have both right now. Some day in the future, sure, you could have LIDAR on every car, but we’re not even close to that now. It’s just too expensive. Tesla decided that getting the data (and being able to deploy the tech without doubling the price of the car) was worth it. They didn’t just skip LIDAR, there’s a good reason for their decision.

If they can create somewhat human level AI, then they can claim super-human piloting, since cameras are better than human eyes (and more of them, and always looking at the road).

But the crux is the AI, the brains, that makes the decisions. The camera systems are already super-human, but the intelligence isn’t there yet (IMO). Obviously the computer can process much more data, and process it faster, than a human brain, but w/out the correct algorithms, it’s still not human level AI.

Kdawg, I think we need to state that more strongly.

Actual A.I.* has only progressed to the level of a medium-smart bug. A.I. researchers are trying hard to get robots up to the level of really smart bugs.

The idea that they’ll achieve machine consciousness or (an even harder goal) near-human intelligence… well, let’s just say that is still very firmly in the realm of science fiction, and is almost certainly still decades away. I have no doubt they will achieve that someday, because an awful lot of resources are being used in pursuit of that goal. But certainly near-human A.I. won’t appear soon enough to help with developing fully autonomous driving, which will likely happen within five years.

*Actual artificial intelligence (or machine intelligence), and not expert systems software that is mislabeled “A.I.” as a marketing ploy.

Bug level intelligence seems fine to drive a car. I don’t know that Tesla will achieve that but I think they might get close enough.

Have you seen bugs hitting your windows again and again, or hitting your car on the highway? They were able to see your car, but could not calculate its speed and physics.
Vision is not the key to autonomous cars; interpretation is. While nearly 100% of the image humans perceive contains information and can be interpreted down to minor details like reflections or light in the fog, computers extract only a few key elements. Humans can even guess distances with one eye only, and basically run a physics simulation of their environment in their heads; computers cannot do this today and therefore need much, much better vision to counterbalance their inferior interpretation skills and computing power.

What Musk is doing here is the equivalent of saying “I was able to build a light bulb within hours, while Edison needed years… we are so much better than Edison”. There is a difference in being a pioneer in computer vision (Mobileye) and just trying to copy a product which is already there and which they licensed before (Tesla).

I am pretty sure Tesla’s current system is inferior to Mobileye’s system. I really hate Musk’s superciliousness.

Kdawg – human eye has the equivalent of 450 megapixels. It also detects many times more shades than existing car cameras, with much faster response times.

Cameras, on the other hand, can see infrared, 360 degrees, etc.

You are correct that visual processing and interpretation is the real issue.

I think so. Tesla is putting up a brave face, but cameras just aren’t going to cut it for safe autonomous driving. Waymo (formerly Google’s self-driving car project) uses cameras, lidar, and radar, in a belt-and-suspenders manner.

All autonomous vehicles operating on public roads are going to need longer-range (like, 250 meters/yards or more) active scanners, and that means lidar or radar, or both.

Relying on camera images (passive scanning) means relying on the far-from-perfect ability of software to find edges of objects in those images, then build up a 3D “view” of the world from flat images, then try to determine speed of objects from that.

Contrariwise, active scanning provides instant, positive data on the distance and speed of objects, without needing to rely on imperfect software interpretations.

Really, it’s a no-brainer for those who really understand the issues and difficulties involved.

Tesla is certainly achieving some tremendous accomplishments in the field of autonomous driving, but y’know… physics.

Actually, yes. When you look at something your focus is very specific. Now take a digital photo and take a look around, there is detail in the whole image. Computers can process the whole image, so where you might miss the speed sign off to the side, or that car speeding through the intersection, a computer analysis of the whole image will “see” these details. Super Human.

Sometimes we capture these details in our peripheral vision, but it is not our main focus, unlike the computer.

Computer vision is very difficult in any case. It is amazing how far it has come along.

The only minor difference is that MobilEye worked for its purpose, and Tesla’s own AP2 is “90% complete” as usual since it was promised in December 2016 😉

Just don’t trust that when those side lines go from grey to blue, that the car will actually stay between them.

You can still absolutely go super-off-the-road.

“Six months”? Tesla started on Tesla Vision over a year ago – maybe even two or three years ago (remember – Mobileye didn’t initiate the breakup until they realized Tesla was working on their own stuff internally), and it hasn’t reached feature parity with the Mobileye system yet.

It took at least twice that long. Maybe 6x that long. And it’s still missing a few convenience features, and Mel still has wavy lines.

Thank you for that reality check.

The Tesla hype machine is in full force here! Speaking as a Tesla fan, I find this claim embarrassing.

Except AP2 is not yet at parity. And Musk has said Tesla was working on their own system for a year prior to the Mobileye divorce.

But other than that, yeah.

Much easier to build something when you know it can be done, and you have a working system to test against.

Besides the whole story being based upon the false claim that Tesla only started in September, why would it be surprising that known technology is easier to duplicate the second time than it was to create the first time?

This happens all the time. It took much longer to build the first cell phone or smart phone than it took for later companies to duplicate the process.

Oh come on. I’m sure they had a skunkworks project working in the background for a long time. You don’t create such a system in 6 months.

Companies do this all the time…they have a partnership with someone but meanwhile, in secret, they are building their own system to ditch the partnership.

I think Tesla/SolarCity is also doing this with solar PV inverter and battery controller makers.

Yup, it’s called being smart. Autopilot is a technology I would want in house.

Yup, that is one of the synergies of Tesla buying Solar City. Tesla has been building their own inverters dating back to the Roadster. Now Tesla Energy is putting their own inverters into powerwalls, etc.

An old article from Cleantechnica on why Tesla did not choose LIDAR

Thanks for that link, Bon Bon!

Looks like a pretty comprehensive coverage of the subject, altho I’m not sure about the claim that lidar doesn’t work in foggy conditions. Some lidar systems use near-infrared wavelengths; won’t that penetrate fog just fine? Or does that require deeper infrared wavelengths? (Hmmm. A bit of Googling indicates infrared wavelength cameras need special cooling. If lidar scanning has the same limitation, that may well rule it out. However, since lidar scanners are not cameras, perhaps not?)

Tesla might be able to get away with using just radar and cameras. But trying to use only cameras is futile. It’s just not going to be adequate, because… physics.

Ha, ha, my x-ray vision works great … Not! Humans use a variety of senses, but when driving it is mostly vision (cite people who drive with ear buds in). So vision is totally doable, because we do it. The main thing about computer vision is that it can process the whole image, unlike humans, who mostly process the part of the image they focus on. And computers can use simultaneous multiple images, hence multiple cameras, sort of like when you drive with someone else and they yell “watch out” because they see the car on your right that you missed.

So vision is mostly all you need. Now how can you make it safer? Ok, add radar so you can see through/around other vehicles, add sonar so you can see through fog, hell even extend the vision range to ultraviolet and infrared, then you can exceed human vision quite significantly.

there are various reasons why cameras might not be the best strategy but your “because… Physics” tagline, while it’s cute and pithy, doesn’t actually make much sense to me. Care to elaborate?

I agree with him but for a different reason. I don’t think we have good enough image processing software and AI yet. We humans evolved it over millions of years and we require 16 years of real world training before we let it drive a car.

But we can compensate for our not-so-great software by adding sensor systems that us humans lack and provide great information like radar, infrared, GPS data & maps, etc.

I pretty much agree with your assessment. It’s the “because… physics” thing that has me (with formal training in both physics and software) scratching my head. Physics has little to do with it; software, much.

Regarding the cross country trip; has Elon ever said how the car would plug itself in?

Nope. We haven’t heard anything along that line since Tesla demonstrated their “solid metal snake” robotic plug-in arm.

Maybe Tesla thinks that is sufficient, altho they certainly have not announced deployment of that at any Supercharger station.

As I recall, Tesla’s demo of the “solid metal snake” was done indoors; will it survive an outdoor environment? What happens when it ices up in winter conditions? Will it continue to operate? I’m skeptical. Personally, I think Tesla needs something more substantial (and unfortunately more expensive), but that’s just my opinion.

Ultimately, I think the market is going to choose wireless charging. I think Tesla is ignoring reality when they claim they’ll never put a wireless charger into a Tesla car. If Tesla actually persists in that, I predict it will be just as self-defeating as Henry Ford making a policy of “Any customer can have a car painted any colour that he wants so long as it is black.” That’s one reason Ford lost so much of its former near-monopoly to competitors, who were willing to give customers different paint colors.


Also the demo was done in such a way that we can’t tell how fast it is going. It strongly suggests it was a time-lapse video. While being a bit slow doesn’t matter since you walk away while it charges, being a lot slow (like taking several minutes to locate and start) could really start to add up since supercharging can be as quick as 20 minutes for a substantial (over half total) charge.

The issue I see with wireless charging is the inefficiency. While losses are only a handful of percentage points, at the energies consumed by electric vehicles that can result in some pretty significant costs. Why pay that extra money and why waste so much energy? To me, it seems the plug makes much more sense.
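For scale, a back-of-envelope calculation of what those losses might cost per car per year (every input below is an assumed, illustrative number, not a measurement of any actual wireless charging system):

```python
# Rough annual cost of wireless-charging losses for one EV.
# All inputs are illustrative assumptions.

MILES_PER_YEAR = 12_000
WH_PER_MILE = 300        # assumed wall-to-wheels consumption
PRICE_PER_KWH = 0.13     # USD, assumed residential electricity rate
LOSS_FRACTION = 0.07     # assumed extra loss vs. a plug (a "handful of percent")

annual_kwh = MILES_PER_YEAR * WH_PER_MILE / 1000
extra_kwh = annual_kwh * LOSS_FRACTION
extra_cost = extra_kwh * PRICE_PER_KWH
print(f"{annual_kwh:.0f} kWh/yr, ~{extra_kwh:.0f} kWh lost, ~${extra_cost:.2f}/yr")
```

With these assumptions the extra cost works out to tens of dollars per car per year; whether that counts as “pretty significant” is a matter of perspective, though the wasted energy does scale with fleet size.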

People want and will pay more for an SUV, so Tesla builds the (heavy) Model X, but having wireless charging is stepping over the line?

Model X 24kWh/100km
BMW i3 17kWh/100km

They are not the same class of vehicle, but if they wanted efficiency über alles they would build something similar to the i3 instead. Dismissing wireless charging on the basis of efficiency just doesn’t make sense.

Make it an option, people who want it will buy and use it, people who don’t won’t.

Not yet they haven’t.

They still don’t have feature parity. Not just for automatic braking either.


Assuming Tesla achieved parity, which I doubt, we don’t know how long MobileEye took to build their system originally. I suspect they took longer, as MobileEye’s system is an ASIC, which has a long manufacturing lead time, and they didn’t blow a thousand dollars per system on a high-end GPU.

However, MobileEye’s system was designed five years ago, and they haven’t stood still. I’ll bet they have some amazing ASICs in the pipeline.

And no one went for the IRON MAN (2008) movie quote? Kind of the best fit for this..

“Obadiah Stane: [shouting] Tony Stark was able to build this in a cave! With a box of scraps!”

Ah, well, in 2020, when all the other vaporware cars are out with some sort of auto-driving system, it will be nice to look back on this and laugh at where things were, and at all the doubt about when Level 5 self-driving would work.

As an owner of an AP2 car, I’m not surprised it took them 6 months. Features and performance are still inferior to AP1.

So we should be impressed that it only took 6 months, instead of horrified that Elon thought it could be done in 7 weeks? Six months ago was early November 2016, and Tesla said it would ship in December 2016 up until January. Also, they shipped people hardware for a month before even starting on the software?

And they’re still lying. The current releases don’t do everything Mobileye did on AP1. They aren’t there yet.