Musk Might Do Cross-Country Tesla Autopilot Trip On V10 Alpha Software


SEP 20 2018 BY STEVEN LOVEDAY

We’re still waiting on the much-anticipated cross-country Tesla Autopilot trip. Will it finally happen soon?

If you follow Tesla and the electric vehicle space, you’re probably well aware that Elon Musk’s promise of a cross-country Tesla Autopilot trip has been pushed back and put on hold for some time now. Once Model 3 production issues kicked in, along with the push for profitability, it seems that many less crucial plans had to be tabled.

This is not to say that it will never happen. While Musk is surely overly optimistic at times and sets the bar extremely high, he does have a reputation for following through on such promises, even if the promise comes late. Better late than never, right?

Musk previously stated that Tesla could have done the autonomous coast-to-coast trip in 2017, but it wouldn’t have been to the level that he was hoping for. He explained that it’s important that the Full Self-Driving tech is capable of working in any location, regardless of conditions. This is much different from using a pre-planned route and attempting to aim for cooperative weather. Musk said:

We could have done the coast-to-coast drive, but it would have required too much specialized code to effectively game it, or it would have been somewhat brittle, in that it would work for one particular route but not as a general solution. So I think we would be able to repeat it, but if it can't do any other route, that's not really a true solution.

He admitted that while progress has been slower than expected, Tesla is surely moving forward, which we’ve seen much more indication of as of late. Musk continued:

It’s also one of those things that’s kind of exponential, where it doesn’t seem like much progress, and suddenly, ‘wow.’ It will seem like, ‘well, this is a lame driver.’ (Then,) ‘okay, that’s a pretty good driver.’ (Then,) ‘holy cow, this driver’s good.’

Now, Musk noted that Tesla engineers are making more progress and Autopilot software Version 9 is on the way. Final testing is already in progress.

A Twitter user replied and asked if V9 would be used for a cross-country trip. Musk replied that it will likely happen with V10 Alpha software. While this doesn’t give us any idea of the timeline, it’s the first he has spoken of the trip in a while. This confirms that it is still set to happen, and if progress continues at this rate, perhaps it will be sooner rather than later.

Categories: Tesla



36 Comments on "Musk Might Do Cross-Country Tesla Autopilot Trip On V10 Alpha Software"


V10 timeline 2019 but really 2021

“This confirms that it is still set to happen, and if progress continues at this rate, perhaps it will be sooner rather than later.”

Of course a cross country AP trip is set to happen, otherwise they’d better stop selling the FSD option!

The only question is when. And that’s the tricky one. The rate of progress hasn’t been really impressive, but maybe it’s really exponential and suddenly everything will happen really fast.

But in general I applaud the decision not to show a “cheated” cross-country trip just because they promised to do so.
I remember when they showed the first AP2 demonstration, with the X navigating through suburban traffic, and everyone expected that to happen really soon. I guess many expectations have been shattered since then, so better to just show off something close to being rolled out.

It seems likely that Level 4 autonomy will take around 5 years, and Level 5 another 10 years after that. I think it’s very likely Tesla will get there first due to their huge data advantage, but it’s probably still a few years away.

I think it’s still a long way off and may not even happen in the lifetime of the Model 3, and that the Model 3 won’t possess the necessary sensors to do it. I think people seriously underestimate how dumb computers are. They are very fast/good at doing something you program them for, but as soon as they encounter an edge case they bomb completely.

I have too much experience with computers to have any interest in having one drive me. At least when I get a BSOD on my PC, I’m not traveling down the highway at 75 mph.

“Probably v10 Alpha build” is Muskspeak for “No”.

Autonomous progress is exponential in the opposite direction that Musk says. Progress is lightning fast at first. Demos are trivially easy. The first 90% is easy. The next 9% is really hard. The next 0.9% takes years.

Waymo is at the 0.09% or 0.009% level. Tesla is not yet at the demo level.

Unfortunately I have to agree. The last few percents are way harder to achieve… As always…

Let’s hope that the exponential growth of data can counteract the exponentially rising complexity of the last few percent…

That way we would be able to end up with some linear growth… more or less who cares… I step back from my former 8-8-2018 prediction (made that quite a while ago…)… Let’s go for 9-9-2019 …

But data only takes you so far. The freak occurrence that had never been observed before still causes the system to completely fail. Even in a completely artificial and limited environment like a computer game, AI can be incredibly dumb. With enough repetition a “neural net” can learn how to do it, but then change one little thing and it’s back to being completely clueless again.
And that’s in a very limited environment, unlike the real world which is a very messy place.

Tesla is skipping the “demo” level, the 90% and the 9%. They are betting that reliable vision is indispensable for the crucial <1%, and going directly for that.

Waymo only works in controlled environments. They haven't solved reliable vision either, to the best of my knowledge.

I think Musk was not speaking in technical terms, but more of the impression the software gives on humans. And normal humans (as opposed to AI techies) are looking at it from the other side: they look at error rate, not success rate.

Going from 99% to 99.9% may seem negligible, but actually the number of errors decreases by a whole order of magnitude, giving the impression of exponential improvement.
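A quick back-of-the-envelope sketch of that point (figures purely illustrative):

```python
# Illustrative only: each 0.9-point jump in success rate cuts the
# number of errors by an order of magnitude.
success_rates = [0.99, 0.999, 0.9999]

for rate in success_rates:
    errors_per_10k = (1 - rate) * 10_000
    print(f"{rate:.2%} success -> {errors_per_10k:.0f} errors per 10,000 decisions")
```

So 100 errors per 10,000 decisions becomes 10, then 1: small-looking gains in the success rate look huge from the error side.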

I know I’m going to get put down for this, but I still maintain they won’t succeed until they put in pairs of cameras for binocular vision. With binocular vision, triangulation can be used to identify the distance to objects, and a 3D model of everything around the car can be presented to the AI system. Cannot calculate distance with only a single viewpoint.
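For reference, the textbook stereo-triangulation relation the commenter is alluding to is very simple: depth equals focal length times baseline divided by disparity. A minimal sketch with made-up numbers (not Tesla’s actual camera geometry):

```python
def stereo_depth(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Classic pinhole-stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: cameras 12 cm apart, 1000 px focal length.
# An object whose image shifts 20 px between the two views is ~6 m away.
print(stereo_depth(0.12, 1000.0, 20.0))  # 6.0
```

Note how depth accuracy degrades as disparity shrinks, which is why a wider camera spacing helps at long range.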

To my best knowledge they have several front-facing cameras (different foci or so… don’t remember exactly… anyone help me?)… Let’s just hope they are spaced far enough from each other to make such calculations possible…

There are three front facing cameras set up for different distances, along with radar and ultrasonic sensors for judging distance.

“Cannot calculate distance with only a single viewpoint.”

Not using cameras, no. But active scanning with lidar, or a phased array of high-res radar, will do that much better, much faster (in terms of computer processing), and with a much higher degree of accuracy than two or 8 or 22 cameras ever could. Optical object recognition software isn’t that reliable, and adding cameras (beyond two) isn’t going to help that at all.

Sticking with cameras is just Tesla (and Elon) stubbornly trying to cheap out. The only thing that’s going to do is cause Tesla to fall further and further behind as other auto makers forge ahead towards true Level 4 self-driving systems.

News flash: Elon is smarter than you, and has studied the problem in way more depth. He might be wrong — but he is much, much more likely to be right than you are. (Or anyone of us, for that matter…) He is neither cheap nor stubborn: he just concluded, from the facts available to him, that Lidar doesn’t add real value. This conclusion will change when presented with data proving his assumptions wrong; and it won’t change otherwise.

As long as optical processing isn’t reliable, self driving outside strictly controlled environments won’t work anyway. Lidar is just a shortcut to early progress; but you still get stuck once you hit the hard issues.

Elon Musk also isn’t as smart as he thinks he is, and arrogance is one of his Achilles’ heels (there are plenty of examples of this). There are a lot of other smart people working on this at other companies (including some most likely much smarter than Musk), and the consensus obviously doesn’t agree with Musk on this. Just because Musk thinks something doesn’t necessarily make it true. He’s been wrong about a lot of things over the years.

I give credit to Musk for being a visionary, but his vaunted intelligence is really blown out of proportion. The same goes for other CEOs like Gates or Jobs. They’re all smart guys, they all had a good vision of what they wanted to accomplish. They weren’t generally smarter than everyone in existence.

This comparison makes little sense. As far as I can tell, Gates was never a visionary. (At least not a successful one; the one book he wrote was just bollocks.) He was basically just a good salesperson. Jobs was more of a visionary, I guess, in the sense that he had a firm idea of what properties were important in his products. But other than that, he wasn’t striving towards any bigger vision either, as far as I can tell. Musk is different, in actually setting out to tackle some big problem and trying to map out a route towards it.

But that’s all kinda beside the point. The point is that everyone who knows Elon seems to agree that he is a brilliant engineer, with a propensity to see things more clearly than pretty much anyone else. Of course he can be wrong, and he has been wrong in the past. But very often he is right. And he is much more likely to be right than any layperson here who formed a shallow opinion based on a few pop-science articles they read on the internet… Just to be clear, we still can and should discuss our own understanding…

Lidar has problems of its own, such as measuring the distance to rain instead of the solid objects behind it, and it will be quite expensive.

Higher resolution radar, in conjunction with cameras, will be helpful. I think the omission of binocular vision is odd and now unjustifiable. It helps solve significant problems the same way humans do it.

It isn’t essential to recognize optical objects (as in classify what they are) as opposed to recognizing when something is coming within range of the car in its predicted path. That’s a more limited problem.

Tesla will continue to sell to drivers/owners and not be looking at level 5 taxi fleets for a long time, and for this purpose binocular vision, like humans solve the problem, should be quite helpful.

You are ignoring the fact that some humans don’t have binocular vision, yet can drive perfectly fine. It’s not as crucial for making sense of the environment as some seem to think it is.

So when watching videos, you have no idea how far the various objects on the screen are?

That is true; you only perceive distance through familiarity. You see a tree and a car, so you make assumptions about size and placement based on things like whether the ground is flat, or whether the base of the tree is above or below the wheels of the car. But if the tree is cleverly painted on the flat surface of the road, you would be completely fooled. Try a simple experiment using only one eye: extend your arm out to the side of your monitor by 8 inches or more, point one finger at the monitor, and then move your hand so that your finger contacts the upper corner of the monitor. With both eyes open this is easy.

Heh, I just did the experiment for fun, not because I expected any revelation. I fully expected to “fail” it — yet in fact I didn’t! 🙂

I lack binocular vision, so I know what it’s like. Yes, estimating exact distance can be challenging in some situations. I was never good at volleyball for example. For the vast majority of everyday tasks (including driving), you don’t need that kind of precision in estimating distances, though.

One thing to keep in mind is that when you are moving, or the objects you are observing are moving (and at least one of these will generally be true in driving), you actually get triangulation simply by observing changes over time. In contrast to actual object recognition, this isn’t a hard problem to solve at all. AFAIK it’s a long established method for 3D mapping.
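That triangulation-over-time idea (often called motion parallax, or structure from motion) reduces, in the simplest pinhole-camera case, to the same geometry as stereo, with the camera’s own travel acting as the baseline. A toy sketch, all numbers invented:

```python
def depth_from_motion(travel_m: float, focal_px: float, pixel_shift_px: float) -> float:
    """Depth of a static point, from how far its image shifts while the
    camera translates sideways by travel_m: Z = f * t / shift (pinhole model)."""
    if pixel_shift_px <= 0:
        raise ValueError("pixel shift must be positive")
    return focal_px * travel_m / pixel_shift_px

# Hypothetical: camera moves 0.5 m between frames (900 px focal length);
# a point whose image shifts 30 px is about 15 m away.
print(depth_from_motion(0.5, 900.0, 30.0))  # 15.0
```

The hard part in practice is not this arithmetic but reliably matching the same point across frames.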

A single camera can use its focus function to estimate distance.

I still think it is not possible to incrementally improve AP to FSD. The AI alone would need replacing. Not to mention the sensor arrays. It’s not a video game with zero consequences for losing.

I completely agree. FSD is going to require active scanners, using lidar or phased-array high-res radar, or both. Tesla stubbornly sticking to cameras as the primary sensors just ain’t gonna cut it, no matter how much Elon wants it to.

All those Tesla cars which currently have the FSD option… I predict that every one of them is going to have to be recalled and have the sensor suite upgraded. Either that, or Tesla is going to have to refund their payment for the FSD option… with interest. The latter would likely be less expensive, but the former will be the only thing that will actually satisfy Tesla’s customers.

Yet millions of people are driving dozens of miles every day, without having any active scanners… And very few accidents are caused by flaws of vision.

Yes, people can do that. Machines cannot. Until the problem is solved, we don’t really know how that machine must work.

Human intelligence is amazingly smarter and more adaptable than a computer. We have hundreds of millions of years of evolution to thank for it.

Yes, and that’s why it’s hard to make an AI that will behave right in every possible traffic situation.

But that’s a different layer entirely. The question of whether to Lidar or not to Lidar is a much narrower problem. Creating a map of the environment doesn’t usually require “smartness”; it’s just sensory processing. AIs are already better at certain vision tasks than humans, when trained with a good data set. The dynamic vision problem to tackle here is more complex, of course, but there is no reason to believe that, with sufficient computing power and good data, AI won’t be able to tackle it as well as humans can from a similar set of sensors.

As I’ve said before, I think Tesla is wasting its time fiddling around with incremental improvements to its current Level 2+ autonomous driving system. Tesla, and other auto makers trying for reliable full self-driving, need to restart, focusing on developing a reliable working SLAM* system. The SLAM is absolutely required as the foundation of any fully developed Level 4-5 autonomous driving system. That will require active scanning with lidar or phased high-res radar. Cameras will never be sufficient; software interpretation of camera images is too slow (in terms of computer processing speed) and too unreliable.

Without a reliable SLAM based on active scanning, no amount of fiddling with software and cameras is ever going to get much farther than Tesla has already gone. So far as I can tell from public reports, Waymo/Google is the only company that has developed a SLAM, and even their system may not work well enough yet. I’m sure they’re working to improve it!

*SLAM stands for Simultaneous Localization And Mapping technology, a process whereby a robot or a device can create a 3D map of its surroundings, and orient itself properly within this map in real time.
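To make the “simultaneous” part of that definition concrete, here is a toy one-dimensional sketch (not a real SLAM algorithm, and all numbers invented): the robot localizes itself by dead reckoning while mapping a landmark by combining its pose estimate with range measurements.

```python
# Toy 1-D illustration of the SLAM idea: localization (dead reckoning)
# and mapping (landmark = pose + measured range) happen together.
robot = 0.0
landmark_estimates = []

odometry = [1.0, 1.0, 1.0]   # forward moves reported by the wheels
ranges = [9.0, 8.1, 6.9]     # noisy measured distances to the landmark

for move, r in zip(odometry, ranges):
    robot += move                         # localization step
    landmark_estimates.append(robot + r)  # mapping step

# Averaging the per-step estimates smooths out the measurement noise.
landmark = sum(landmark_estimates) / len(landmark_estimates)
print(round(landmark, 2))
```

A real SLAM system also feeds the map back into the pose estimate (closing the loop), which is exactly what makes the problem hard.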

So glad we have a true expert here, who knows better than the engineers wasting their time working on the wrong problems…

Let’s not pretend that engineers always find a solution to complex problems, and stay within the original requirements.

Sure. Yet a layperson telling people actually working in the field that they are doing it all wrong and it can’t possibly work, is quite frankly pretty arrogant.

Cameras offer much higher resolution than LIDAR or RADAR. Cameras typically run at 60fps, I would think that should be fast enough. Is LIDAR or RADAR scanning faster? How fast?

Tesla has already developed new AI hardware they will have to retrofit in all older cars for FSD.

Resolution is one thing, but dynamic range is another. If you want to get some idea of the challenge, while riding as a passenger, record video out the front window with the best video camera you can get your hands on in a high-contrast scenario, such as at night with oncoming headlights, or on a sunny road with deep shadows. Then play it back and you will see how much “data” is missing. This also explains why Autopilot-equipped cars sometimes brake for the shadows of highway overpasses, or miss white trucks against a bright background.

One solution is to read the same frame at different exposures and combine them in software, but that requires a LOT of processing power. Maybe Tesla’s new superchip will be able to do that, but no video camera I am aware of today can do it.
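For the curious, the multi-exposure idea the commenter describes can be sketched in a few lines. This toy version treats a frame as a flat list of 8-bit grayscale pixels and weights each exposure by how close each pixel sits to mid-gray; a real exposure-fusion pipeline is far more involved:

```python
def fuse(short_exposure, long_exposure):
    """Blend two exposures of the same scene, pixel by pixel, giving
    more weight to whichever exposure is better exposed (nearer 128)."""
    fused = []
    for s, l in zip(short_exposure, long_exposure):
        ws = 257 - abs(s - 128) * 2  # weight: 257 at mid-gray, 1 at 0/255
        wl = 257 - abs(l - 128) * 2
        fused.append((s * ws + l * wl) / (ws + wl))
    return fused

# A highlight blown out to 255 in the long exposure gets pulled back
# toward the short exposure's usable value.
print(fuse([140], [255]))
```

Doing this per pixel, per frame, at camera frame rates is where the processing cost mentioned above comes from.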

The processing power needed for that is negligible compared to what’s needed for image recognition…

If you ever look into security cameras for a house or building, you quickly discover just how bad cameras are, especially in low light conditions. The resolution and recognition of the best camera is still absolutely terrible compared to the human eye.