Nissan Explains How Human Intelligence Will Ultimately Make Autonomous Cars Work In All Situations

2 months ago by Mark Kane

Nissan LEAF’s autonomous drive demonstration event – London

Manufacturers are doing what they can to bring autonomous cars to market, but even with all that technology, human intelligence will apparently still be needed.

Autonomous LEAF In Front Of 1960s Circular, Spooky “Authorized Personnel Only” Building

Nissan’s autonomous driving expert recently revealed that the idea is to use a human back-up, in a form similar to air traffic control.

Nissan is working on this project alongside NASA.

How it works is pretty straightforward…although we aren’t sure how it would work in a practical situation.

When driving conditions are deemed too complicated for the computer, a human will remotely support the self-driving car. Afterwards, that solution will be made available in the cloud for other vehicles to follow in similar situations.

“The buzzword for the technology is ‘Distributed Artificial Intelligence’ in which cars, the Cloud and humans share their intelligence to solve a problem.

“Nissan calls it Seamless Autonomous Mobility (SAM) and has been testing its operation on a fleet of disguised prototypes in California, US.”

Maarten Sierhuis, director of the Nissan Research Center and a former NASA engineer, said:

“These are like air traffic controllers; they facilitate the flow, rather than control the vehicle remotely with a joystick.”

source: Autocar
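In rough pseudocode, the escalate-then-share loop Nissan describes might look like the sketch below. This is purely illustrative: all names and interfaces here are invented, since Nissan has not published SAM’s actual design.

```python
# Hypothetical sketch of the SAM flow: a stuck car asks a remote human once,
# then the answer is cached in the cloud for the rest of the fleet.

cloud_solutions = {}  # solved scenarios shared fleet-wide, keyed by location

def request_human_guidance(location, obstacle):
    """Stand-in for the remote 'mobility manager' plotting a safe path."""
    print(f"Operator reviews '{obstacle}' at {location} and draws a detour")
    return ["waypoint_a", "waypoint_b", "waypoint_c"]

def handle_ambiguous_scene(location, obstacle):
    """Car can't decide on its own: reuse a cloud solution or escalate."""
    if location in cloud_solutions:
        return cloud_solutions[location]      # another car already solved this
    path = request_human_guidance(location, obstacle)
    cloud_solutions[location] = path          # share with similar vehicles
    return path

# The first car escalates to a human; the second reuses the stored answer.
print(handle_ambiguous_scene((51.5, -0.12), "road works blocking both lanes"))
print(handle_ambiguous_scene((51.5, -0.12), "road works blocking both lanes"))
```

The point of the cache is that a human is consulted once per novel situation; every later vehicle that hits the same spot follows the stored solution without waiting on an operator.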


11 responses to "Nissan Explains How Human Intelligence Will Ultimately Make Autonomous Cars Work In All Situations"

  1. Doggydogworld says:

    CA’s DMV recently proposed self-driving car rules that allow this type of remote supervision. I personally believe Waymo’s fleet in Phoenix will shift from in-car to remote supervision late this year.

    Regulators strongly prefer this type of gradual transition over a “cold turkey” approach.

  2. Mil says:

    Isn’t this exactly what Tesla cars do with Autopilot?

  3. Loboc says:

    AI will advance to the point where this will be unnecessary. Even in aviation, human controllers will disappear.

    AI is exploding right now. It won’t be long before computers are more intelligent than humans, especially for the narrow AI needed to drive a car.

    It’s more about the sensors and laws keeping pace.

    1. JIMJFOX says:

      Highly unlikely.

      After decades of development, computer-controlled airline systems STILL fall over, as British Airways’ just did. Talk of pilotless planes… would you fly in one??

  4. pjwood1 says:

    AI will only accelerate the congestion problems caused by higher volumes. Your time vs. their money. It’s the only way to lower liability.

    1. Pushmi-Pullyu says:

      I’ve noticed a distinct bias against autonomous driving in your posts, Pjwood1. You are of course entitled to your firm decision to allow autonomous driving in your own car only when they pry the steering wheel from your cold, dead fingers. 😉 But your viewpoint seems to be so extreme that it’s distorting your view of reality.

      Unlike human drivers, autonomous cars will be (are being) programmed to cooperate, rather than compete, to enable better traffic flow and thus faster throughput. And when most cars on the road are autonomous, traffic jams will be avoided by centralized traffic control software overseeing traffic flow in an entire region, re-routing traffic as necessary around congested areas.

      In the future, the dwindling number of remaining human-driven cars will be treated by traffic control computers as dangerous, erratic moving obstacles from which self-driving cars should maintain as safe a distance as possible. (Today that’s a joke. A generation from now, it won’t be.)

      1. pjwood1 says:

        PP,

        I have and use these features, and I feel I have a good understanding of their abilities and limits.

        “A generation from now” says a lot about the realities of what people on this board are excited about today. Why is that? When you fancy things that don’t exist, you might end up settling for less today. This over-shoot of image versus reality is responsible for creating cars whose ergonomics are starting to suck. I mean objective, simple, safe things like eyes on the road, ease of controls, etc. Tesla buyers are so transfixed on AI hardware potential that I think more than perspective is being lost. Just look at the Model 3??

        I don’t share the utopian “traffic will be better” beliefs. Adaptive cruise systems are set for given follow distances. Certain roads adhere to different cultural norms. Sometimes 1.5 car lengths are an invitation to be cut off, usually not. So, what is “safe” as an AI distance definition? You get that it’s directly related to harder braking and more distance, correct? Manual driving will remain an option, and these min/max follow distances will also continue to be chosen by our brains. There is not as much difference as you might expect between AI cars set to “safe” distances and the way they will react, versus human drivers having different preferences when they also come upon traffic. Once one car crosses in front of another, the “too close” chain reaction will still happen.

        So then, we fancy car-to-car communication, but that won’t work for another 20 years, as that’s how long it takes for the U.S. auto fleet to effectively roll over to perfect harmony. And this assumes some guy with an old “AI” car isn’t gonna still drive with a broken sensor, etc. (It also assumes we want/will pay for AI cars.) I don’t think my view is a distortion of reality. Cruise control was introduced decades ago, and auto-steer is now “the bees knees”. We’re going a bit crazy to talk about this AI stuff without realizing it’s so, so far out. This is an EV site. Maybe someone will start “Inside AVs”?

        Discussions I don’t expect on InsideAVs would be how future rural drivers reject AI, and how lost urban access makes it moot. Since we’re talking about 10-20 years out, how will AI get around not being allowed in “the last mile”? -I don’t ponder this stuff, and I work in the city.

        1. Pushmi-Pullyu says:

          @pjwood1:

          Thank you for your thoughtful reply. It’s refreshing to have someone willing to talk about these issues in more than a superficial manner.

          “Tesla buyers are so transfixed on AI hardware potential, I think more than perspective is being lost. Just look at the Model 3??”

          Here I agree. Tesla has gotten out over its skis in designing the M3 for fully autonomous driving, when that has yet to be developed. And despite Tesla’s claims, I very seriously doubt their current hardware is up to the level of reliability necessary for full autonomous driving. As I’ve said many times, that needs active scanning in 360°, not just the front-facing radar scanners Tesla is now using.

          “Sometimes 1.5 car lengths are an invitation to be cut off, usually not. So, what is ‘safe’ as an AI distance definition? You get that it’s directly related to harder braking and more distance, correct?”

          Safe following distance is dependent on several variables, including road conditions, tire wear, brake effectiveness, and how smooth and even the road surface is, as well as (of course) speed. Presumably fully autonomous cars will observe their own ability to brake in a given distance, and adjust following distance accordingly. I expect autonomous cars to do much better than human drivers at adjusting speed for road conditions. It’s crazy that people continue to tailgate when roads are wet, snowy, or icy, yet I often see exactly that insane behavior under such adverse weather conditions. Autonomous cars will be programmed for safer driving, period. Not completely safe — that’s an impossible goal — but safer. Hopefully much safer under most conditions. Elon Musk said Tesla’s goal is a 90% reduction in accidents. That might actually be possible, and in the near future (<5 years) too.
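          The kinematics behind that point are simple. A rough sketch using the standard stopping-distance formula, with assumed, approximate deceleration figures for each surface:

          ```python
          # Stopping distance = reaction distance + braking distance.
          # The deceleration figures below are rough, illustrative values.

          def stopping_distance_m(speed_kmh, reaction_s, decel_ms2):
              v = speed_kmh / 3.6  # km/h to m/s
              return v * reaction_s + v ** 2 / (2 * decel_ms2)

          for surface, decel in [("dry asphalt", 7.0), ("wet asphalt", 4.0), ("packed snow", 1.5)]:
              d = stopping_distance_m(100, reaction_s=1.0, decel_ms2=decel)
              print(f"{surface:12s}: ~{d:.0f} m to stop from 100 km/h")
          ```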

          "There is not as much difference as you might expect between AI cars set to 'safe' distances and the way they will react, versus human drivers…"

          There I firmly disagree. Human reaction time is a large fraction of a second, or even as much as two seconds if the human isn't paying close attention. A computer's reaction time is measured in milliseconds. Much closer safe following distances will be possible under optimum conditions (i.e., good weather, dry smooth road without sharp turns) with fully autonomous vehicles than with human drivers. Autonomous vehicles don't get distracted from watching the road, don't get drunk or high or fall asleep at the wheel, and are not subject to "highway hypnosis".
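          A back-of-the-envelope comparison makes the reaction-time argument concrete (the reaction times are common rough estimates, not measured values):

          ```python
          # Distance covered before braking even begins, at roughly 108 km/h.
          SPEED_MS = 30  # metres per second

          for driver, reaction_s in [("distracted human", 2.0),
                                     ("attentive human", 0.75),
                                     ("computer", 0.05)]:
              print(f"{driver:16s}: {SPEED_MS * reaction_s:5.1f} m travelled before braking")
          ```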

          "So then, we fancy car to car communitcation, but that won’t work for another 20 years as that’s how long it takes for a fleet of U.S. autos to effectively roll to perfect harmony."

          If the cell network can route a phone call to a moving car, then cars can certainly communicate with a centralized regional automated traffic control system. Even if two cars' direct car-to-car communications are incompatible, that shouldn't result in any serious delay as long as each car can connect with a centralized traffic control system, which can then act as intermediary between the two cars which can't "talk" to each other directly.
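          The intermediary idea is an ordinary relay pattern in software. A toy sketch, with an invented message format:

          ```python
          # Two cars with incompatible direct protocols exchange intents through
          # a regional server both can reach. The wire format here is made up.
          import json

          class RegionalTrafficServer:
              def __init__(self):
                  self.inbox = {}  # car_id -> list of pending messages

              def relay(self, sender_id, recipient_id, intent):
                  msg = json.dumps({"from": sender_id, "intent": intent})
                  self.inbox.setdefault(recipient_id, []).append(msg)

              def poll(self, car_id):
                  return self.inbox.pop(car_id, [])

          server = RegionalTrafficServer()
          server.relay("car_A", "car_B", "merging left in 3 s")
          print(server.poll("car_B"))  # car_B reads car_A's intent, no direct link
          ```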

          If one or both cars are out of range of cell towers or otherwise incapable of connecting to the regional traffic control system, they can just fall back on the routines for treating the other car as a human-driven one, to be given a wide berth.

          "And this assumes some guy with an old “AI” car isn’t gonna still drive with a broken sensor, etc. (It also assumes we want/will pay for/ AI cars..) I don’t think my view is a distortion of reality."

          You're ignoring the probability — I'd say the near-certainty — that autonomous cars will have self-testing routines they run every time the car is started, and the car will refuse to run if the tests say it's unsafe to drive. We already have EVs exhibiting that behavior; why do you doubt what we already have will continue to be used?
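          A pre-drive self-check of the kind described could be as simple as the hypothetical sketch below (sensor names and the health threshold are invented for illustration):

          ```python
          # Run at startup: refuse autonomous mode if any sensor reports poor health.
          def run_self_tests(sensor_status):
              """sensor_status maps sensor name -> health score in [0, 1]."""
              failures = [name for name, health in sensor_status.items() if health < 0.9]
              return len(failures) == 0, failures

          ok, failures = run_self_tests({"front_radar": 0.98, "left_camera": 0.65, "gps": 0.99})
          if not ok:
              print(f"Autonomous mode disabled; degraded sensors: {failures}")
          ```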

          "We’re going a bit crazy, to talk about this AI stuff without realizing it’s so, so far out.

          As I recall, Tesla says it's going to demonstrate coast-to-coast fully autonomous driving before the end of this year. Perhaps it's not so far away as you think?

          "This is an EV site. Maybe someone will start 'Inside AVs'?"

          I do agree there is far too much focus at InsideEVs on self-driving cars. But if that's what readers want to read about, I suppose IEVs will continue to publish articles on the subject. *Shrug* Not my decision.

          "Discussions I don’t expect on InsideAVs would be how future rural drivers reject AI, and how lost urban access makes it moot."

          Not much to discuss regarding the former. Those who want to continue to drive older, non-autonomous cars and trucks will certainly continue to do so, although they may find themselves restricted to driving outside city centers. Is that what you mean by "lost urban access", and why would that make the subject unworthy of discussion?

          "Since we’re talking about 10-20 years out, how will AI get around not being allowed in 'the last mile'?"

          Sorry, I don't understand the question. Why wouldn't AI be allowed to drive the car the final mile to the destination? Perhaps I'm expecting more ability from a fully autonomous car than you are. When cars are fully autonomous, they should be able to "see" around them sufficiently to pick out a safe route to drive so long as there is a paved surface or even a gravel road to drive on. I also expect cars will be programmed to respond to voice commands from passengers, such as "Stop here" or "back up" or "Turn left and go ahead… keep going… now pull into that parking space on the right." We already have in-car navigation systems that respond to voice commands, so a lot of the necessary programming has already been done.
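          At its simplest, the voice-command layer described above reduces to a phrase-to-maneuver lookup. A toy sketch, with an invented phrase table:

          ```python
          # Map recognized phrases to maneuvers; a real system would sit this
          # behind a full speech-recognition front end.
          def handle_command(phrase):
              commands = {
                  "stop here": "braking to a stop",
                  "back up": "reversing slowly",
                  "turn left": "signalling and turning left",
              }
              return commands.get(phrase.lower().strip(),
                                  "unrecognized command; asking passenger to repeat")

          print(handle_command("Stop here"))  # -> braking to a stop
          print(handle_command("Go faster"))  # -> unrecognized command; ...
          ```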

          Dirt roads will likely be more problematic than gravel roads for self-driving cars, and might well require the dirt road to be adapted to the vehicle (by installing roadside sensor reflectors) rather than the vehicle being adapted to the dirt road.

          At the worst, even under strange or difficult driving conditions, autonomous cars should be capable of having a human driver drive from point A to point B, and be able to "memorize" and retrace the route at any later date. From a computer programmer's viewpoint, that's little different from word processors that have macros which will repeat any keystrokes you enter into them, and we've had that kind of software for decades now.
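          The macro analogy maps directly to code. A miniature record-and-replay sketch, leaving out the drift correction and obstacle handling a real system would need:

          ```python
          # Record a human-driven route as (position, heading) samples; replay later.
          recorded_route = []

          def record_sample(position, heading):
              recorded_route.append((position, heading))

          def replay():
              for position, heading in recorded_route:
                  print(f"steer to heading {heading:5.1f} deg toward {position}")

          # A human drives the tricky last mile once...
          record_sample((0.0, 0.0), 90.0)
          record_sample((0.0, 50.0), 45.0)
          record_sample((35.0, 85.0), 0.0)
          # ...and the car retraces it on demand.
          replay()
          ```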

          1. pjwood1 says:

            My quote: “There is not as much difference as you might expect between AI cars set to ‘safe’ distances and the way they will react, versus human drivers…”

            Part of the reason I said this is because drivers set the “AI” themselves, or will only be willing to buy cars where they can influence what this distance should be. That will perpetuate the stop-and-go traffic snarls. You’re banking not just on technological success, but on substantially every driver buying in and signing away control. That’s a high bar.

            PP: “You’re ignoring the probability — I’d say the near-certainty — that autonomous cars will have self-testing routines they run every time the car is started, and the car will refuse to run if the tests say it’s unsafe to drive. We already have EVs exhibiting that behavior; why do you doubt what we already have will continue to be used?”

            …because one real appeal of the EV is to get away from the engine light. You’re saying these sensors will tether the cars to service centers. Now, Tesla, Apple, and Google may go with Mr. Smith to Washington and buy regulations, but this would then explain why I’m not on this bus. There are better ways to solve distracted driving than immobilizing cars.

            Silicon Valley’s pathways to rent-seeking are about as abusive as the auto industry’s. The opportunities here are their own full-employment act. The more we marinate in how awesome it will be to be a passenger, the more cars are built for passengers. We already see the disregard for safe (manual) driving in how they are instrumented.

            PP: “Sorry, I don’t understand the question. Why wouldn’t AI be allowed to drive the car the final mile to the destination? ”

            Cities are reducing parking spaces, and the resulting $500, $600+ monthly parking rates have the effect of endangering the automobile before any additional regs are adopted.

            PP, I’ll probably remain biased simply because I like to drive, and I continue to see driving being a casualty of a different objective. There’s no reason both manual and “driver assistance” features can’t be in harmony. I think Tesla misses this, and is positioning itself for a total ride-share network where ownership is nearly eliminated. That’s a very unsafe business model for at least the next 5-10 years.

  5. TM says:

    Maybe we can feed the slate of politicians into an AI engine and it can select the ones most likely to do positive things.

    Or maybe an AI Engine can run for office itself someday.

  6. Pushmi-Pullyu says:

    Makes no sense to me at all. The purpose of an air traffic controller is to maintain safe separation between airliners, by assigning them to different altitudes, directing them to change course when they get too close to each other, and (for example) putting them into holding patterns when approaching a busy airport. Airplanes should maintain a separation of minutes of flying time, except when taking off and landing, giving the air traffic controllers sufficient time to react and avoid a collision even with each controller dealing with hundreds of planes.

    I don’t see any equivalent situation in autonomous driving. If a self-driving car can’t figure out for itself what to do in a given situation, then it can either rely on previous mapping of the exact path humans have driven when driving the same route (the kind of mapping Tesla is doing, and presumably Waymo is also doing), or it can ask for a human driver to take over.

    In neither case does there seem to be any advantage to having a human in the decision loop. Cars drive much too close together to allow for reaction time from a human “ground traffic controller”, and it makes no sense to hire the hundreds or thousands of human traffic controllers that would presumably be necessary to supervise a growing number of autonomous or semi-autonomous cars on the road.

    I can see the possibility of a centralized entity overseeing traffic in general for a region, re-routing traffic to avoid accidents, traffic jams, and other bottlenecks. But again, it would be far better to have “expert system” software handling such cases, since a human can only deal with one case at a time. Contrariwise, a distributed computing network can deal with them all at once, resolving problems in real time as they occur.
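    For the flavor of what such “expert system” re-routing involves: a shortest-path solver re-routes around a congested link the instant its cost changes, with no human in the loop. The road graph and costs below are invented for illustration:

    ```python
    # Dijkstra over a road graph {node: {neighbor: cost}}; congestion raises cost.
    import heapq

    def shortest_path(graph, start, goal):
        queue, seen = [(0, start, [start])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph.get(node, {}).items():
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
        return None

    roads = {"A": {"B": 5, "C": 2}, "B": {"D": 1}, "C": {"B": 1, "D": 8}, "D": {}}
    print(shortest_path(roads, "A", "D"))  # before congestion: A -> C -> B -> D
    roads["C"]["B"] = 50                   # accident reported on link C -> B
    print(shortest_path(roads, "A", "D"))  # affected cars re-route: A -> B -> D
    ```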

    Maybe I’m missing something here. But going only on the info in this article, the situation described appears to be the sort of problem which should be handled by teams developing software, not by trying to use humans in a real-time decision-making loop as a crutch for inadequate software. Perhaps Nissan has not yet recognized the need for a computerized traffic control system for autonomous cars, a control system separate from the computers in the cars themselves, but communicating with them wirelessly. If they haven’t figured that out yet, then they need to “get with the program”. And when I say “program” I mean computer program… and not human “ground traffic controllers”.
