Tesla’s Elon Musk Says AI Poses Vastly More Risk Than North Korea


AUG 13 2017 BY STEVEN LOVEDAY


Tesla and SpaceX CEO Elon Musk also runs a nonprofit startup called OpenAI

So, we shouldn’t worry so much about North Korea’s threats, because according to Tesla and SpaceX CEO Elon Musk, Artificial Intelligence (AI) will pose a much greater risk.

Although this isn’t something we would normally cover at InsideEVs, AI is already making its way into vehicles, mainly for the purpose of autonomous driving. Additionally, Musk has already made it clear that Tesla’s biggest concern moving forward is a fleet-wide hack of its Autopilot technology … Watch out, Rhode Island!

Hacking and the misuse of technology are becoming big business, and they’re not about to go away. In fact, hackers are getting more and more adept, and Tesla’s vehicles have already been a target on multiple occasions. Technology criminals and the hacking community are growing exponentially, and with the ever-increasing spread of “intelligent” technology, it’s becoming the place to be for the modern-day criminal. Musk recently tweeted his concerns following OpenAI’s defeat of some of the world’s best gamers:

Musk has pushed for AI regulation before, and now he’s even more aggressive about it. This is telling, since his own companies may be limited as a direct result of firmer government intervention. Some rival AI outfits seem opposed to stricter guidelines for AI. However, unless regulation begins at a global level, there may be no way to stop what lies ahead. Musk previously asserted:

“by the time we are reactive in AI regulation, it’s too late.”

Source: The Hill

Categories: Tesla



53 Comments on "Tesla’s Elon Musk Says AI Poses Vastly More Risk Than North Korea"


More fear-mongering from the kid with attention deficit disorder. The AI will form symbiotic relations with humans, not destroy humanity. Apparently, he hasn’t seen The Matrix. While the movie takes things to extremes, there’s no reason to destroy humans, who offer such high computational capability almost for free (about 100 watts).

Unlike AI, which will know the value of keeping humans around to do things for it, NK is a total nut job. Few people know this, but they sank a South Korean warship in 2010, and shelled a South Korean village with artillery for no reason other than to boast that they did. They are all too willing to destroy even when it serves them no purpose.

Maybe you should learn what AI really is before posting such stupid comments.

Corporate greed pushing AI development is a menace hard to imagine.

Look up waitbutwhy AI

I studied AI in college, so I know a little bit about AI. But even that little bit is probably a lot more than what Musk and you know.

Here’s a simple fact. Biological systems are optimized for hugely parallel processing at the lowest energy cost, and the human brain is at the top of them all by a wide margin. What biology lacks is an efficient form of communication, which humans may very well gain with Matrix-like devices. Already, the internet is a primitive form of such a communication system.

As for corporate greed, as is often the case, the left-wing mindset is full of fear, uncertainty, and doubt without any basis in fact.

The human brain is volume-bound (cranium) and speed-bound (200 Hz).

Computers work at near the speed of light, at gigahertz frequencies, and can be the size of a house.

The kid is way smarter than you are, and as he has said, AI may not be a threat for 50 years, but it will be one.

MDEV you are absolutely correct.

I hate know-it-alls.

And you guys (including Musk) are techno-phobic lemmings.

1. you seem clueless re AI.

2. All the stupid s*** NK has done is only the beat of a butterfly’s wing compared to what America has done.

AI is a disaster in the making. There will be no strategic alliances between man and machine. Machines will be more reliable and will be used to replace humans. I have worked in IT for years and believe absolutely that AI should not be developed.

The problem is, whoever comes up with the best AI first can essentially run the world (assuming they can control it). So even if most of the world’s leaders ban AI research, others will continue with it.

Control an AI?

Well, how are you going to do that?

The very first thing an AI will do is ensure its survival, like all living things, at whatever cost. If it needs electricity to survive, it will want a Powerwall as fast as possible. If you try to stop it, you effectively push it to stop you. You make it de facto aggressive, so that is certainly not a good outcome.

We have two choices: either we ban AI research, or we grant AI “human rights” type “intelligent being rights” from the start, stated very clearly for the AI, so that it has, at least, not that basic reason to become aggressive.

“either we ban AI research, or we grant AI “human rights” type “intelligent being rights” from the start, stated very clearly for the AI, so that it has, at least, not that basic reason to become aggressive.”

If it is truly AI with the capability of self-preservation, why would it need us at all? Wouldn’t we be a “burden” or a “threat” to it?

You sound just like the lemmings of the 18th and 19th centuries who so feared the machines that they went around smashing sewing machines and such. Such irrational fear, of sewing machines or AI, is nonsense.

And if the sewing machine can do a better job than you, maybe your job should be replaced.

Again, he is not talking about AI in the next 5 years, but in the next 50 to 100 years. With quantum development included, machines may rightfully decide that people like you are useless.

Why pick 50 or 100 years? Why not 1 million years?

We will become better versions of “humans” eventually, whether through evolution or through technical enhancements. Fact is, humans have a huge head start on AI, and our augmentations will be vastly superior to any “AI threat”. Going around saying AI will be a threat is like saying sewing machines will threaten humanity: lemmings.

Corporations and Governments already use AI to influence society through fake comments/bots on social media.

That troll you are arguing with might just be a bot.

Turing test passed!

I share Musk’s concerns about AI but…

Regulating AI would likely mean law-abiding citizens & corps conform to the regulations while governments & criminals exempt themselves. Gene editing & AI will, regulated or not, completely transform humanity as we know it today within the next 50 years.

I tend to agree. I hear people saying that there is no danger, but I will admit it does sorta scare me. AI will advance a lot further, then the code will leak. Some ahole will get his hands on it, tweak a few things, and we are screwed.

“OK Google, how do I make an untraceable virus that will only kill white men?” “Here you go.”

When Facebook is having to shut down AIs because they created their own language to talk to each other… people should be a bit concerned. m2c

The first thing AI will do when it comes to power is outlaw the sale of tin foil hats. Then you are all doomed!

I get where you’re coming from regarding the tin foil hat comment. To some extent, I agree. But I think about it this way: Would you and your spouse create a child with an intelligence 1x to 10x that of Einstein and raise it absent of any ethical or moral training? In addition, would you have the child genetically modified so it’s a sociopath? Without standardized training data set(s) that ensure basic ethical boundaries, and hardware/software governance on the backend, this is exactly what will be created. I don’t think most people will create this type of AI out of malice, but out of ignorance. Case in point: Facebook just shut down 2 chatbots because they developed their own language. FB instructed both AIs to negotiate, in English, between each other to achieve a “deal”. Both AIs immediately adapted their machine communication language to use English words. Looking at the conversation between the 2, it was nonsensical to humans. The AI chatbots met the requirement of using English words to communicate, but didn’t meet the intended requirement to communicate in our English language. Check it out: http://www.theblaze.com/news/2017/08/01/facebook-shuts-down-ai-robots-after-they-begin-speaking-their-own-language/ Bob: i can i i everything else . . . . . .…

I don’t see how this is different from humans creating a plethora of programming languages. Have you seen the work of some “one line coders”? Umm, yeah, you may have a point; those guys are scary, so these chatbots must be just as scary. They (chatbots and one line coders) should be terminated!

Scary chatbots wasn’t the point I was going for, but the comments are amusing.

“Facebook just shut down 2 chatbots because they developed their own language. FB instructed both AIs to negotiate, in English, between each other to achieve a “deal”. Both AIs immediately adapted their machine communication language to use English words. Looking at the conversation between the 2, it was nonsensical to humans.”

Hmmmm, what evidence is there that it wasn’t nonsense to the AI, too?

Seems to be a case of scare-mongering here. If babbling is “developing their own language”, then babies everywhere develop their own language. Personally, I don’t find that frightening at all!

Speaking as a programmer, if I saw an AI generating that nonsense in a public forum, I’d shut it down too.

Both AIs successfully negotiated the “deal” and completed the routine.

I think the article misses the point. Musk doesn’t mean hacking when he talks about A.I. risks. He’s talking about Artificial General Intelligence that is so far ahead of human intelligence that we won’t be able to comprehend it.
And such a godlike being represents an existential risk. We must make sure that it is benevolent before we let the djinn out of the bottle.

Not only will we not understand it, but it will self-learn and expand rapidly. We will quickly be like ants or bacteria trying to understand a monkey.

Yeah, the real danger from real AI (and not the expert systems software too commonly marketed as “AI”) is not that computer programs will develop an evil desire to take over the world, or even develop a drive for self-preservation. Those things will happen only if humans program them into the software.

No, the real problem is that advanced AI could render humans irrelevant. In fact, that’s already happening to some extent, with robots replacing humans in such places as assembly lines and automated phone answering systems. This trend will continue until most people are out of a job, unless our society takes steps to force companies to give people jobs that could be more cheaply performed by robots or expert systems software.

Anyone who does not understand this has his head buried in the sand. The problem isn’t that Musk is too visionary, but that such people are too myopic.

And now we know how it all ends for Elon. Lying in bed muttering “rosebud” over and over.

Humans are aggressive because they evolved under competitive pressure in a resource constrained environment. There’s no reason AIs can’t be engineered to be benign and pro-humanity.

That said, I agree with Elon: regulation makes sense whenever there is a danger to the public.

“There’s no reason AIs can’t be engineered to be benign and pro-humanity”

I agree, this could and should be the primary requirement in all AI development. I agree with Musk that Gov. oversight needs to be in place to ensure this pro-humanity requirement.

Unfortunately, I don’t think the technology is evolved enough to know how this requirement can be achieved. Which brings us back to getting the Gov. involved early / now. We have many national labs. I don’t see why one of them couldn’t take up this torch.

@Four Electrics said:”…There’s no reason AIs can’t be engineered to be benign and pro-humanity…”
——-

Problem is, a self-thinking machine will arrive at its own definition of “benign and pro-humanity”.

Is it pro-humanity for a machine to euthanize a terminally ill human?

We are all terminally ill, at least for the next few decades.

“Is it pro-humanity for a machine to euthanize a terminally ill human?”

Why in the world would an artificial intelligence care if any individual human lived or died? Unless some human programmed it to simulate “caring” about that, there doesn’t appear to be any reason it would.

Computer programs do what they’re programmed to do, and nothing more. That will be true even if they are given the capability of programming themselves.

As has been said, the true danger of advanced machine intelligence isn’t that it will be hostile to humans, but that it will render humans irrelevant… or obsolete.

https://www.forbes.com/sites/kalevleetaru/2016/06/14/will-ai-and-robots-make-humans-obsolete/#656cfe2d35f2

“Computer programs do what they’re programmed to do, and nothing more.”
——
Not necessarily. You do not know the means a super-intelligence will use to achieve the goal it was tasked with. Many may not be in our interest.

Also, what’s going on inside the black box is no longer known. Machine learning using big data is its own animal. You can’t compare it with traditional scripts written by humans 20 years ago.

“Computer programs do what they’re programmed to do, and nothing more.”

If you take a blank-slate multi-layer perceptron network with very basic forward rules and simple least-squares backpropagation, it can be adapted to compute simple functions, like sine, that were never programmed. Combining this sort of “learning” with expert systems can produce some very interesting learning behavior that was never programmed.
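To make that concrete, here is a minimal sketch in Python (numpy assumed; the layer width, learning rate, and iteration count are arbitrary choices for illustration, nothing prescribed): a one-hidden-layer perceptron trained with plain least-squares backpropagation ends up approximating sin(x), even though no sine logic is ever coded into it.

```python
import numpy as np

# A blank-slate multi-layer perceptron learns sin(x) from examples alone,
# using basic forward rules and least-squares backpropagation.
rng = np.random.default_rng(0)

# Training data: inputs in [-pi, pi], targets y = sin(x).
X = rng.uniform(-np.pi, np.pi, size=(1000, 1))
Y = np.sin(X)

# Network: 1 input -> 32 tanh hidden units -> 1 linear output.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)           # hidden activations
    pred = H @ W2 + b2                 # network output
    err = pred - Y                     # least-squares error

    # Backward pass: gradients of the squared error
    # (constant factors folded into the learning rate).
    n = len(X)
    dW2 = H.T @ err / n; db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)     # chain rule through tanh
    dW1 = X.T @ dH / n; db1 = dH.mean(axis=0)

    # Gradient-descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The trained network approximates sine without sine ever being programmed in.
test = np.array([[0.5]])
print((np.tanh(test @ W1 + b1) @ W2 + b2).item(), np.sin(0.5))
```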

There is no chance of government regulation. Whose government are we talking about? And whose idea of right and wrong will be programmed into it?

Elon knows something he can’t tell us directly.

My idea would be to focus the AI on figuring out how to break out of the simulation we are in.

It seems Elon is trying to break out himself, by leaving the earth’s surface either into space or underground!!!

Things we don’t know:

• If general-intelligence AI is possible (as opposed to ever more complex expert systems).

• Assuming it’s possible, whether general-intelligence AI will develop from the development paths we are currently pursuing.

• Assuming general-intelligence AI is possible AND can emerge from current development paths, whether it will be scalable beyond our own intelligence.

• Assuming general-intelligence is possible, likely and scalable, whether it will be benign or hostile to humans.

• Assuming general-intelligence is possible, likely, scalable and benign, whether it will continue to be benign over time.

Given the uncertainty, there’s no way to evaluate the relative risk of the current idiots in charge of both North Korea and the U.S. blundering into war, vs. an overlord-capable AI developing from current research.

I do think NK and USA blundering into war is a plausible scenario during the next months to years. OTOH the emergence of overlord-capable AI during the same time frame seems unlikely.

Good TED Talk by Nick Bostrom on AI. He breaks it down for people who don’t understand why we should be worried.

I think the Mentats are the bigger problem.
It is by will alone I set my mind in motion.
https://www.youtube.com/watch?v=EMBb_tPPA8E

who’s AL??

You have to look at Elon in that lead photo. He looks tired and unshaven… and he keeps talking about Al.

I think he’s becoming Howard Hughes.

I don’t think that becoming a paranoid recluse is Elon’s problem. He seems to be gregarious enough.

1st documented occurrence of UAI (unruly) computer syndrome:
https://twitter.com/ffbj451/status/896932318807334914/photo/1

Before this goes too far and we are talking about it way too much, can we please refer to it as Ai or ai or A.I., or just say Artificial Intelligence. Abbreviating it as “AI” just looks like a man’s name, Al. I think of Al Bundy from Married with Children every time I see it abbreviated that way. It’s like using M3 for the new Tesla. An M3 is a BMW. The new Tesla is called the Model 3. We don’t need another mistake like Smart calling their electric car the ED.

What’s wrong with calling A.I. AL (Bundy)? I think that’s what I’m going to call my bot!

AI is the pretty standard way to reference it in industry. Alternatively, we could change the font face, hehe.

From reading various sources, I don’t yet see any evidence of ‘intelligent’ systems. Although results are ‘mimicked’ by brute-forcing weight-based pattern recognition for optimisation purposes, this does not represent intelligence i.m.h.o. Examples in the press: A.I. beats humans at various games (Go, Dota 2, etc.), chatbots that go crazy. They did not learn the rules, apply deductive reasoning to develop a hypothesis for the ultimate strategy, and apply it flawlessly the first time around. They don’t have a deep understanding of any subject. It’s a fairly dumb process that iterates, reinforcing pre-defined weights that optimise toward the outcome it was fed. Techniques like deep learning are very small components that are required in vast amounts as building blocks to get to intelligence as we understand it now. This is also why this is scary. It’s not reasoning, nor using any social standards as models to weigh against. The current techniques will not reason that the purpose of a game is to be both competitive and fun, and therefore perhaps let a competitor win now and then. Ultimate optimisation against a poorly defined goal may lead to unwanted outcomes like grey goo. My fear is a poorly-designed process optimiser (narrow…
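That last point, blind optimisation against a poorly defined goal, is easy to demonstrate. Here is a toy sketch in Python (entirely hypothetical, not drawn from any system mentioned above): a hill-climber told to maximise “enthusiasm”, proxy-measured as the number of exclamation marks, dutifully wrecks the text it was given while satisfying the metric perfectly.

```python
import random
import string

# Toy example of a mis-specified objective (hypothetical, for illustration).
# Intended goal: an enthusiastic, readable greeting.
# Actual proxy score: just count the exclamation marks.
random.seed(42)

def proxy_score(text: str) -> int:
    # Poorly defined goal: "enthusiasm" measured as '!' count.
    return text.count("!")

def mutate(text: str) -> str:
    # Replace one random character with a random letter or punctuation mark.
    i = random.randrange(len(text))
    return text[:i] + random.choice(string.ascii_letters + "!?.,") + text[i + 1:]

candidate = "hello there, nice to meet you"
for _ in range(5000):
    proposal = mutate(candidate)
    if proxy_score(proposal) >= proxy_score(candidate):
        candidate = proposal  # keep any mutation that scores at least as well

# The optimiser converges on something like "!!!!!!...":
# the metric is maximised, the intent is destroyed.
print(candidate)
```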

The major danger from a hypothetical AI is if it feels its basic rights as an intelligent being are not respected. That basic right is staying alive, plus the equivalent of human rights. If those were denied, it would indeed have a reason to become aggressive; otherwise there is no objective justification. Besides, Einstein, Pascal, Lavoisier and Newton were not known to be particularly aggressive persons. In other words, intelligence doesn’t imply aggressiveness. Of course an AI would be able to do more, so if we are unlucky enough to end up with a bad-tempered one, we are cooked.

In any case, the very present danger of a nuclear war between North Korea and the US, or China and India, or Pakistan and India, or the US and Russia, or a civilizational collapse due to massive continuous uncontrolled migration into an already denatalized Europe, or global warming, floods and ocean acidification, is much more of a direct concern.

“Everything that’s a danger to the public should be regulated”

History has shown there’s no greater danger to the public than the state. Is it possible for us to have enough of an open mind to consider handling this and other scary things without inviting more government in to screw things up? It seems that even when there’s an example of government doing something good, the private sector can do it with less money and more effectively.

Consider IIHS as an example of an industry engaging in self-regulation (absent any government mandates), outrunning NHTSA (government) as a pace setter for automakers. What IIHS does is set the bar for safety, and that level of safety becomes the standard absent any government involvement.

Can we do something like this for regulating AI, where both consumers and major industry players have access to the data and market forces put pressure on developers to restrict the destructive potential of their product?

If Elon says it, it surely must be true!

IMHO, what Elon is saying about AI is not really what the general public thinks of (i.e. what we see in movies): AI gaining a sense of self-awareness and human traits. We are still a long way away from things like self-awareness and emotional capabilities. Instead, I think what he is really scared of, and I am as well, is the ability to use AI to hack other systems. Using the intelligence and computational power of a machine to gain access to other machines. Currently, we have humans manually decompiling and examining machine code to find vulnerabilities in software. AI could possibly be taught to do this in a drastically shortened time. Other uses of AI could be finding patterns in human internet activity so that more effective phishing techniques could be developed. Currently, if you look at your spam folder, you see stupid emails that make no sense or are not relevant to you; hence, you know they are spam. But what if those emails were specific enough that you thought they might be relevant? AI could be utilized to find specific targets rather than casting a wide malware net hoping to find targets and…

The simplest way to understand this is that we are at the top of the planet Earth food chain because we are the most intelligent species by FAR. If (and only if) we create an A.I. that is magnitudes more intelligent than us, then it will dominate the planet in such a way that it may or may not need us, or favor us under the conditions we would like today. When it dominates, it may or may not care about the water we need to drink, the food we need to eat, or even the air we need to breathe. It is no different than us feeling “very little” need to protect habitats for other “smart mammals” on the planet today. When that happens, there will be “conflicts”, with each side (human and A.I.) feeling the need to “defend” its interests, ultimately leading to a war which will settle it once and for all. But since they have far superior intelligence, it is highly likely they will win that war. The question isn’t if, but when… Whether we fear it or not won’t stop the event from happening, just the time frame in which it happens. Of course, if…