Here’s What Tesla V9 Autopilot Sees

OCT 24 2018 BY STEVEN LOVEDAY

What does the world look like in Tesla Autopilot Version 9?

We’ve seen many different videos showing what Tesla Autopilot “sees,” thanks to people in the Tesla community with the know-how to hack into the system and capture the camera feeds. This newest video is unique not only because it’s the first to show Software Version 9, but also because it lets us see six feeds at the same time.

The new software update finally allows Tesla’s neural net to work with all of the car’s cameras. While we can’t say definitively how much Tesla Autopilot has improved as a result, you can clearly see that the cameras take in an impressive amount of the environment surrounding the car, and the technology seems to “see” and identify what it’s supposed to. Now it’s just a matter of time before all of this valuable data is processed well enough to deliver vastly improved results.

If you’ve used Tesla Autopilot with the new Software Version 9, we’d love to read your impressions in the comment section.

Video Description via greentheonly on YouTube:

Seeing the world in autopilot v9

This is running 18.39.6 firmware. All autopilot cameras are utilized.

This video does not show the “narrow” stream because it’s somewhat redundant with the main camera.

You may notice some choppiness on all the cameras other than “main.” The main camera is captured at 36 fps and is therefore very smooth, but “fisheye” is captured at only 6 fps and the sides at 9 fps (the car itself captures all cameras at 36 fps, and the backup camera at 30 fps). I needed to limit the intake rate to avoid overwhelming my storage device; it’s still struggling a bit, which is why the frame rate sometimes drops on some cams.
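
For a sense of why storage becomes the bottleneck, here is a rough back-of-envelope estimate in Python. The frame rates come from the description above; the 1280×960 resolution (the AR0132’s native size, mentioned in the comments below), the 2 bytes per pixel, and the stream names are all assumptions for illustration, not captured specs.

```python
# Back-of-envelope estimate of the raw capture bandwidth described above.
# Frame rates are from the video description; the 1280x960 resolution and
# 2 bytes/pixel are assumptions, and the stream names are placeholders.

FRAME_BYTES = 1280 * 960 * 2  # ~2.5 MB per raw frame at 16 bits/pixel

streams_fps = {
    "main": 36,            # captured at the full 36 fps
    "fisheye": 6,          # throttled down from 36 fps
    "left_pillar": 9,
    "right_pillar": 9,
    "left_repeater": 9,
    "right_repeater": 9,
}

total_bytes_per_s = sum(fps * FRAME_BYTES for fps in streams_fps.values())
print(f"~{total_bytes_per_s / 1e6:.0f} MB/s sustained")  # ~192 MB/s
```

Even at these throttled rates, that works out to roughly 190 MB/s of sustained writes, which plausibly explains the occasional frame drops he mentions.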

Source: Electrek

Categories: Tesla, Videos

7 Comments on "Here’s What Tesla V9 Autopilot Sees"

theflew

I would like to see a similar video at night. Cameras work well on a sunny day; things get interesting on cloudy days and at night.

BillT

Things also get interesting with a mix of bright sunshine and deep shadows, or when driving with the sun pointed right at the lens. As a photographer, I know contrast is a huge challenge to deal with. Maybe with the new chip they will be able to read multiple exposures per frame and combine them, basically doing HDR as you would for stills, but for video in real time.
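
For readers unfamiliar with the technique BillT describes, here is a minimal sketch of multi-exposure fusion using OpenCV’s Mertens algorithm. It is purely illustrative (not Tesla’s pipeline, and the file names are placeholders); automotive HDR sensors perform a similar combination on-chip, per frame, in real time.

```python
import cv2

# Minimal exposure-fusion sketch: combine bracketed exposures of one scene
# into a single frame that keeps detail in both highlights and shadows.
# File names are placeholders; this illustrates the idea, not Tesla's pipeline.
exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

merge = cv2.createMergeMertens()   # weights pixels by contrast, saturation, exposedness
fused = merge.process(exposures)   # float32 image scaled to roughly [0, 1]

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```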

JakeY

The Aptina AR0132 they were using for all AP2.0 cameras (except the rearview) supports the HDR video you’re talking about, allowing up to 115 dB (about 19 stops/EV) of dynamic range. They were also using an RCCC filter (instead of RGB) to increase the amount of light reaching the sensor.

However, they may have used different cameras for newer vehicles and the Model 3.
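
As a sanity check on those figures: dynamic range in dB converts to photographic stops at about 6.02 dB per stop, since one stop is a doubling of light and 20·log10(2) ≈ 6.02. So 115 dB does come out to roughly 19 stops:

```python
import math

def db_to_stops(db: float) -> float:
    # One stop is a doubling of light: 20 * log10(2) ~ 6.02 dB per stop.
    return db / (20 * math.log10(2))

print(f"{db_to_stops(115):.1f} stops")  # ~19.1, matching the 19 EV figure
```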

antrik

Well, since the 2.5 hardware at least can provide full-colour feeds, I guess it must use RGB or RCCB? (If 2.0 indeed used RCCC, I guess that might be part of the reason the dash-cam feature is not available for those cars…)

Pushmi-Pullyu

You took the words right out of my keyboard.

Okay, let’s be wildly over-optimistic here and assume that, despite robotics researchers trying and failing for decades to get optical object recognition software to work reliably, Tesla’s software developers manage to produce software that can interpret images as reliably as the highly developed visual cortex of the human brain.

At best, that would mean the self-driving car could “see” as well as the human eye and brain can… which means it’s totally blind in the dark.

No, no, and no. The objective here should be to develop self-driving cars that drive better than humans do. Better means using a sensor system that is indifferent to day or night yet capable of seeing the whole environment (unlike low-res Doppler radar): in other words, lidar or phased-array radar.

What’s the alternative, if we want self-driving cars to provide safety against T-bone collisions and rear-end collisions? Install dozens of “headlights” all around the car, pointed in all directions, so the cameras can “see” at night?

I swear, this idea that we are ever going to get reliable, safe self-driving cars using cameras as the primary sensors is going from clueless to downright idiotic.

antrik

You are out of your depth here. What robotics researchers have been failing to do for decades is completely irrelevant: optical recognition was troublesome for a long time, but it has made giant strides over the past couple of years, both on the algorithmic front (once researchers finally figured out deep learning) and on the hardware front, with chips that enable networks several orders of magnitude larger.

“Going from clueless to downright idiotic”… that’s an apt description of your comments on this topic.

Jim Whitehead

From an AI guy: you should redo this with the heavy side shadows of late afternoon, which mess up the visual systems in most autopilots.