Nissan Explains How Human Intelligence Will Ultimately Make Autonomous Cars Work In All Situations

JUN 8 2017 BY MARK KANE

Manufacturers are doing what they can to bring autonomous cars to market, but even with all that technology, human intelligence will apparently still be needed.

Autonomous LEAF In Front Of 1960s Circular, Spooky “Authorized Personnel Only” Building

Nissan’s autonomous driving expert recently revealed an idea to use human back-up operators, working in a way similar to air traffic controllers.

Nissan is working on this project alongside NASA.

How it works is pretty straightforward…although we aren’t sure how it would work in a practical situation.

When driving conditions are deemed too complicated for the computer, a human will remotely support the self-driving car. The resulting solution is then stored in the cloud for other vehicles to follow in similar situations.
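The escalation flow described above can be sketched very loosely in code. Everything here is a hypothetical illustration of the concept, not Nissan’s actual SAM software; the function names, the “situation fingerprint” idea, and the cloud cache are all assumptions made for the sketch:

```python
# Hypothetical sketch of a SAM-style escalation loop (illustration only,
# not Nissan's actual implementation). A car that cannot plan a path asks
# a remote human operator for a route; the approved route is then cached
# in the cloud so other cars facing the same situation can reuse it
# without another human call.

cloud_solutions = {}  # situation fingerprint -> previously approved route


def request_human_guidance(situation):
    # Stand-in for the remote operator: per the article, the human draws a
    # safe path for the car to follow rather than steering it with a joystick.
    return f"route-approved-for-{situation}"


def plan(situation, car_can_handle):
    """Return a route, escalating to the cloud and then to a human if needed."""
    if car_can_handle:
        return "onboard-route"                 # normal autonomous driving
    if situation in cloud_solutions:
        return cloud_solutions[situation]      # reuse another car's solution
    route = request_human_guidance(situation)  # air-traffic-controller step
    cloud_solutions[situation] = route         # share with the rest of the fleet
    return route


# First car at a blocked construction zone needs a human; the second does not.
print(plan("construction-zone-17", car_can_handle=False))
print(plan("construction-zone-17", car_can_handle=False))  # served from cloud
```

The point of the cache is the “distributed” part of Distributed Artificial Intelligence: the human is consulted once per novel situation, not once per car.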

“The buzzword for the technology is ‘Distributed Artificial Intelligence’ in which cars, the Cloud and humans share their intelligence to solve a problem.

Nissan calls it Seamless Autonomous Mobility (SAM) and has been testing its operation on a fleet of disguised prototypes in California, US.”

Maarten Sierhuis, director of the Nissan Research Center and a former NASA engineer, said:

“These are like air traffic controllers; they facilitate the flow, rather than control the vehicle remotely with a joystick.”

source: Autocar

Categories: Nissan


11 Comments on "Nissan Explains How Human Intelligence Will Ultimately Make Autonomous Cars Work In All Situations"

Doggydogworld

CA’s DMV recently proposed self-driving car rules that allow this type of remote supervision. I personally believe Waymo’s fleet in Phoenix will shift from in-car to remote supervision late this year.

Regulators strongly prefer this type of gradual transition over a “cold turkey” approach.

Mil

Isn’t this exactly what Tesla cars do with Autopilot?

Loboc

AI will advance to the point where this will be unnecessary. Even in aviation, human controllers will disappear.

AI is exploding right now. It won’t be long before computers are more intelligent than humans, especially for the narrow AI needed to drive a car.

It’s more about the sensors and laws keeping pace.

JIMJFOX

Highly unlikely.

After decades of development, computer-controlled airline systems STILL fall over, as British Airways’ just did. Talk of pilotless planes… would you fly in one?

pjwood1

AI will only accelerate the congestion problems caused by higher volumes. Your time vs. their money. It’s the only way to lower liability.

Pushmi-Pullyu

I’ve noticed a distinct bias against autonomous driving in your posts, Pjwood1. You are of course entitled to your firm decision to allow autonomous driving in your own car only when they pry the steering wheel from your cold, dead fingers. 😉 But your viewpoint seems so extreme that it’s distorting your view of reality.

Unlike human drivers, autonomous cars will be (are being) programmed to cooperate, rather than compete, to enable better traffic flow and thus faster throughput. And when most cars on the road are autonomous, traffic jams will be avoided by centralized traffic control software overseeing traffic flow in an entire region, re-routing traffic as necessary around congested areas.

In the future, the dwindling number of remaining human-driven cars will be treated by traffic control computers as dangerous, erratic moving obstacles, from which self-driving cars should maintain as safe a distance as possible. (Today that’s a joke. A generation from now, it won’t be.)

pjwood1

PP, I have and use these features, and feel I have a good understanding of their abilities and limits. “A generation from now” says a lot about the realities of what people on this board are excited about today. Why is that? When you fancy things that don’t exist, you might end up settling for less today. This over-shoot of image versus reality is responsible for creating cars whose ergonomics are starting to suck. I mean objective, simple, safe things like eyes on the road, ease of controls, etc. Tesla buyers are so transfixed on AI hardware potential, I think more than perspective is being lost. Just look at the Model 3.

I don’t share the utopian “traffic will be better” beliefs. Adaptive cruise systems are set for given follow distances. Certain roads adhere to different cultural norms. Sometimes 1.5 car lengths are an invitation to be cut off, usually not. So, what is “safe” as an AI distance definition? You get that it’s directly related to harder braking and more distance, correct? Manual driving will remain an option, and these min/max follow distances will also continue to be chosen by our brains. There is not as much difference as you might expect… Read more »
Pushmi-Pullyu

@pjwood1: Thank you for your thoughtful reply. It’s refreshing to have someone willing to talk about these issues in more than a superficial manner.

“Tesla buyers are so transfixed on AI hardware potential, I think more than perspective is being lost. Just look at the Model 3.”

Here I agree. Tesla has gotten out over its skis in designing the M3 for fully autonomous driving, when that has yet to be developed. And despite Tesla’s claims, I very seriously doubt their current hardware is up to the level of reliability necessary for fully autonomous driving. As I’ve said many times, that needs active scanning in 360°, not just the front-facing radar scanner Tesla is now using.

“Sometimes 1.5 car lengths are an invitation to be cut off, usually not. So, what is ‘safe’ as an AI distance definition? You get that it’s directly related to harder braking and more distance, correct?”

Safe following distance depends on several variables, including road conditions, tire wear, brake effectiveness, and how smooth and even the road surface is, as well as (of course) speed. Presumably fully autonomous cars will observe their own ability to brake in a given distance, and adjust following distance accordingly.… Read more »
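The follow-distance argument the commenters are circling comes down to textbook stopping-distance physics: total stopping distance is the distance covered during reaction time plus the braking distance. A rough back-of-envelope sketch (the reaction times and deceleration figure below are illustrative assumptions, not measured values for any vehicle):

```python
# Back-of-envelope stopping-distance model (standard physics, not any
# vendor's actual controller logic):
#   distance = reaction travel (v * t_react) + braking travel (v^2 / (2a))


def stopping_distance(speed_mps, reaction_s, decel_mps2):
    """Total stopping distance in meters for a given speed, reaction time,
    and constant braking deceleration."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)


# At 30 m/s (~108 km/h), assuming dry-pavement braking of ~7 m/s^2:
# compare a computer reacting in 0.1 s with a human reacting in 1.5 s.
computer = stopping_distance(30, 0.1, 7.0)
human = stopping_distance(30, 1.5, 7.0)
print(round(computer, 1), round(human, 1))  # → 67.3 109.3
```

Under these assumptions the braking travel is identical; only the reaction-time term differs, which is why an autonomous car could in principle follow closer at the same safety margin, while a remote human supervisor in the loop would push the required gap back toward (or past) human-driver distances.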
pjwood1

My quote: “There is not as much difference as you might expect between AI cars set to ‘safe’ distances and the way they will react, versus human drivers…”

Part of the reason I said this is because drivers set the “AI” themselves, or will only be willing to buy cars where they can influence what this distance should be. That will perpetuate the stop-and-go traffic snarls. You’re banking on not just technological success, but the sponsorship of substantially every driver to purchase and sign away control. That’s a high bar.

PP: “You’re ignoring the probability — I’d say the near-certainty — that autonomous cars will have self-testing routines they run every time the car is started, and the car will refuse to run if the tests say it’s unsafe to drive. We already have EVs exhibiting that behavior; why do you doubt what we already have will continue to be used?”

…because one real appeal of the EV is to get away from the engine light. You’re saying these sensors will tether the cars to service centers. Now, Tesla, Apple, Google may go with Mr. Smith to Washington, and buy regulations, but this would then explain why I’m not on this… Read more »
TM

Maybe we can feed the slate of politicians into an AI engine and they can select the ones most likely to do positive things.

Or maybe an AI Engine can run for office itself someday.

Pushmi-Pullyu

Makes no sense to me at all. The purpose of an air traffic controller is to maintain safe separation between airliners: assigning them to different altitudes, directing them to change course when they get too close to each other, and (for example) putting them into holding patterns when approaching a busy airport. Airplanes maintain a separation of minutes of flying time, except when taking off and landing, giving the air traffic controllers sufficient time to react and avoid a collision even with each controller dealing with hundreds of planes.

I don’t see any equivalent situation in autonomous driving. If a self-driving car can’t figure out for itself what to do in a given situation, then it can either rely on previous mapping of the exact path humans have driven on the same route (the kind of mapping Tesla is doing, and presumably Waymo is also doing), or it can ask for a human driver to take over. In neither case does there seem to be any advantage to having a human in the decision loop. Cars drive much too close together to allow for reaction time from a human “ground traffic controller”, and it makes no sense to… Read more »