The real danger of artificial intelligence

pete_c

EDN Network
Richard Quinnell, October 30, 2015

At the beginning of this year, several respected scientists issued a letter warning about the dangers of artificial intelligence (AI). In particular, they were concerned that we would create an AI able to adapt and evolve on its own, and to do so at such an accelerated rate that it would move beyond human ability to understand or control. And that, they warned, could spell the end of mankind. But I think the real danger of AI is much closer to us than that undefined and likely distant future.

For one thing, I have serious doubts about the whole AI apocalypse scenario. We are an awfully long way from creating any kind of computing system with the complexity embodied in the human brain. In addition, we don't really know what intelligence is, what's necessary for it to exist, and how it arises in the first place. Complexity alone clearly isn't enough. We humans all have brains, but intelligence varies widely. I don't see how we can artificially create an intelligence when we don't really have a specification to follow.

What we do have is a hazy description of what intelligent behavior looks like, and so far all our AI efforts have concentrated on mimicking some elements of that behavior. Those efforts have produced some impressive results, but only in narrow application areas. We have chess programs that can beat grandmasters, interactive programs that are pushing the boundaries of the Turing Test, and a supercomputer that can beat human Jeopardy champions. But we have nothing that can do all of those things plus the thousands of other things a human can.

And even were we able to create something that was truly intelligent, who's to say that such an entity would be malevolent?

I do think the dangers of AI are real and will manifest in the near future, however. But they won't arise because of how intelligent the machines are. They'll arise because the machines won't be intelligent enough, yet we will hand control over to them anyway and, in so doing, lose the ability to take control back ourselves.

This handoff and skill loss is already starting to happen in the airline industry, according to this New Yorker article. Autopilots are good enough to handle the vast majority of situations without human intervention, so the pilot's attention wanders, and when a situation arises that the autopilot cannot properly handle, there is an increased chance that the human pilot's startled reaction will be the wrong one.

Then there is the GIGO factor (GIGO = garbage in, garbage out). If the AI system is getting incorrect information, it is highly likely to make an improper decision with potentially disastrous consequences. Humans are able to take in information from a variety of sources, integrate them all, compare that against experience, and use the result to identify faulty information sources. AI devices are a long way from being able to accomplish the same thing, yet we're predicting the advent of fully autonomous cars by 2020. I think that's giving the AI too much control too soon.
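
The cross-checking described above can be illustrated with a toy Python sketch: compare redundant sources and flag the one that strays from the consensus, the way a human distrusts one wildly-off instrument. The sensor names, values, and tolerance below are invented purely for illustration and are not taken from any real autopilot or car software.

[code]
# Toy sketch of cross-checking redundant inputs: flag any source that
# strays too far from the consensus of the group.
# Sensor names, values, and the tolerance are invented for illustration.
from statistics import median

def flag_suspect_readings(readings, tolerance):
    """Return the names of sources whose reading is far from the median."""
    consensus = median(readings.values())
    return [name for name, value in readings.items()
            if abs(value - consensus) > tolerance]

# Three airspeed sources; pretend pitot_2 iced over and is feeding garbage in.
airspeed = {"pitot_1": 245.0, "pitot_2": 90.0, "gps_derived": 243.5}
print(flag_suspect_readings(airspeed, tolerance=20.0))  # ['pitot_2']
[/code]

Without some check of this kind, garbage in really does become garbage out.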

So, no, I don't worry about a rogue AI exterminating mankind. I worry about an inadequate AI being given control over things that it's not ready for. If mankind is to be exterminated by an AI system, it won't be because of malicious intent on the part of the AI. It will be because the AI will be performing its functions exactly as designed, which is not the same as performing as intended.

What are your thoughts?

 
 
There definitely is a growing concern globally over this. A few experts have come forward to express this same concern.
 
The author of the above is being shortsighted and expressing concerns based only on his own conceptual reach. The point of real AI is the machine being able to learn and make its own decisions. Why dream that there is a limit on this expansion of the machine mind, just to protect our insecurities? The author touched on this when praising human intelligence, but had the shortsightedness to assume this will never happen with AI because machines are not good enough to be "US". This conceited view also predominates in the article, similar to what some religions teach: "We are the only ones." Scientific examination and exploration, without bias, is slowly disproving this, as our ignorance and conceit get pushed aside.
 
Interesting stuff. Luckily it is a long way into the future, and regulations should be put in place to keep this from happening. When that happens we should start to worry, as there will then be reasons to outlaw, or even control, outcomes, demonstrating belief in its possibility by the powers that be at that point.
 
OTOH: American law was put in place to limit human cloning once upon a time. Obviously that hasn't worked as, not only was a human clone produced, but he was elected as a US President, way back in history.
 
I say that life is about the journey and not the destination. What will people be doing after everything is automated? Where should it stop? I grow some tomatoes in the yard because I enjoy growing and eating them, I drive old cars because I enjoy driving them, and I learned to fly RC planes and helis because I was challenged by them.
 
These are small things, but really..... why do so many people not want to do anything with their own hands and wits anymore? No, don't answer that, I know that it's to have more money so they can do even less. Where is the joy in having everything done for you, whether it's by another person or by a machine? If AI takes over our lives then I would have to say that we worked hard for it and got what we deserved.
 
I'm trying to live in the moment like my dog taught me to and really don't spend much time worrying about AI.
 
Mike.
 
Pete
 
Your post has led my mind to another question. Can AI have a bad personality? Imagine AI with a bad attitude. My wife's job involves IT support of electronic media marketing, where their idea of AI is to tell which aisle you are standing in when you are at Home Depot, your age group and race, etc., and then target ads to you over your phone.
 
I think that this is one annoying form of AI, along with phone-dialing devices. I'm not a technophobe, but I do worry about how so-called AI will be used in the future. Is there going to be any Artificial Ethics?
 
Mike.
 
Is there going to be any Artificial Ethics?
 
Personally I doubt it.  Machines only have what they are programmed to have. 
 
Ethics will not make money for the company doing the automating anyway.
 
electronic media marketing, where their idea of AI is to tell which aisle you are standing in when you are at Home Depot
 
Yup; here I have noticed the beginnings of this with motion-activated LCD displays next to new promo stuff.  I do not keep my smartphone on, though, and do not download many, if any, of the offered widgets (that, and I have switched over to MS Mobile now).
 
Almost all of the big box stores around here use self checkout and it works fine for me.  It is getting better and will eventually automate that whole process.
 
It has been a few years now since I got involved in the self-service kiosk thing.  Initially it was domestic, then international.
 
This endeavor switched your business over to a machine preconfigured with only certain functions.  Around the same time we tested functionality using biometrics and it worked fine (did this for government office stuff).
 
It has been a few years, and we did test using RFID-tagged customers (today it would be your smartphone).  You can do it either way.  It was presented as a benefit to the customer and basically just stuck you in an awareness database to provide you with personal services; like magic, it worked fine.  BTW, we also utilized RFID tagging for a gas and oil company's large facility in a third-world country.  Worked great.
 
I would say soon these machines will have a little AI personality that will address you by name and interact with you like a person. 
 
Thinking today McDonald's is going in that direction.  A few years back in the UK, I noticed during one lunch visit that the restaurant was totally automated, such that there was only an IT overseer watching what was going on (and they really didn't even need that one person).  It was a bit vending-machine-like, but there were no vending machines in place.
 
My wife has been involved in the banking industry for a bit over 30 years now.  Most of the personal banking now is automated, with a bit of personal control (banks do not even trust their own employees accessing your account anymore).
 
The term artificial intelligence is thrown around a lot; I wonder what the definition of artificial intelligence really is.
 
A kiosk doesn't think or learn, so it is just a machine performing a fixed set of functions. Even flying an airplane is reading exterior influences (input) and performing tasks based on the input (output), so isn't that just a computer program? It is certainly automation, but is it intelligent? The software that played the game show Jeopardy was learning answers as it went along, so maybe it was intelligent? Would its collected knowledge, having played the game for several years, make it intelligent, or just a large volume of information?
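
To make that kiosk-versus-Watson distinction concrete, here is a toy contrast in Python (it bears no resemblance to Watson's actual architecture): one machine answers only from a table fixed at programming time, the other updates its table when it is later told the right answer.

[code]
# Toy contrast, not Watson's real design: a fixed-function machine versus one
# that "learns" by remembering corrections it receives after the fact.

class FixedKiosk:
    ANSWERS = {"largest planet": "Jupiter"}        # frozen at programming time

    def answer(self, question):
        return self.ANSWERS.get(question, "I don't know")

class LearningPlayer:
    def __init__(self):
        self.answers = {}                          # grows with experience

    def answer(self, question):
        return self.answers.get(question, "I don't know")

    def feedback(self, question, correct_answer):
        self.answers[question] = correct_answer    # remember for next time

player = LearningPlayer()
print(player.answer("capital of Peru"))            # I don't know
player.feedback("capital of Peru", "Lima")
print(player.answer("capital of Peru"))            # Lima
[/code]

Whether a big enough version of that second table amounts to intelligence, or just a large volume of information, is exactly the question.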
 
I like playing devil's advocate so I ask - are we just computers that go through our lives collecting information and referring back to it when we need to make a decision? If that's true then I would think that we should be able to make very good electronic versions of ourselves before too long. I can't put my finger on it but I think and hope that there is more to it than that.
 
EDIT
 
The part about adapting and evolving on its own from your original post is what scares me, and I hope that never happens.
 
AI - Artificial Intelligence - is (well, as the wiki or Alexa or your smartphone states):
 
Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field of study which studies how to create computers and computer software that are capable of intelligent behavior. Major AI researchers and textbooks define this field as "the study and design of intelligent agents", in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".
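
That "intelligent agent" definition (perceive the environment, take the action that maximizes the chance of success) can be sketched in a few lines of Python; the thermostat percept and scoring function below are made up purely to illustrate the loop, not taken from any real product.

[code]
# Minimal sketch of the textbook agent: perceive, then pick the action with
# the best expected success. The thermostat example is invented.

def agent_step(percept, actions, expected_success):
    """Choose the action scoring highest for the current percept."""
    return max(actions, key=lambda action: expected_success(percept, action))

def comfort_score(temperature_c, action):
    target = 21.0
    effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}[action]
    return -abs((temperature_c + effect) - target)  # closer to target is better

print(agent_step(18.0, ["heat", "cool", "idle"], comfort_score))  # heat
[/code]

By that definition even a thermostat qualifies in a trivial way, which is part of why the term gets thrown around so loosely.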
 
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research as do intelligent personal assistants in smartphones.
 
The OP states, "I worry about an inadequate AI being given control over things that it's not ready for."
 
It already is, and folks have already adapted and are evolving with it.
 
Personally, many folks know too that there are wiki articles that are mostly fictional stories; that could be history or whatever.
 
Many folks believe whatever they read on the internet.
 
This world is sorely lacking on intelligence (Present company excluded), if we have to invent some to boost the average I say go for it. :)
 
Will artificial intelligence replace humans?
 
 
It's only a matter of time.
 
We have discovered no upper limit to the processing and problem solving capabilities of machine intelligences. Given the current rate of development of computer technology, I think it reasonable to expect the arrival of machine intelligences on par with humans in the next 50 years. After reaching the critical point where an AI can evaluate and improve its own systems, all bets are off as to the heights it could ascend to. But will AI replace humans? There are a few ways this could happen. First, an AI could decide to eliminate humanity as a means to an end or as an end in itself. Second, one or more AIs could survive the extinction of the human race due to their ability to survive and self-replicate anywhere, including the vacuum of space. Third, and in my opinion least likely, humanity could decide to stop self-replicating and leave the earth to AI, as an attempt to overcome the inevitable mortality, pettiness, and strife that plagues us.
 
Please note that post #1 above, titled The real danger of artificial intelligence (the OP), is about AI - artificial intelligence.
 
The OP is not about clones, nor does the original post have anything to do with politics.
 
But will AI replace humans?
pete_c said:
Second, one or more AIs could survive the extinction of the human race due to their ability to survive and self-replicate anywhere, including the vacuum of space.
 
I like the odds of this one. We do seem more than willing to risk our environment, and maybe ultimately our ability to live on earth, in our pursuit of energy to fuel our machines. Most of us would rather throw grandma under the bus than give up the remote control.
 
Mike.
 
It doesn't take AI to replace humans. Simple machines have replaced many functions. More advanced machines have replaced many hard manual-labour jobs. Computers have replaced many repetitive and/or complex human brain tasks.
 
I think it would be safe to say that as computer brains advance we should see more functions of humans replaced. How many, or to what extent? I think that depends on how far into the future we go.
 
As far as clones go, the two technologies may be integrated to form machine based androids with human or animal brains that cannot be reproduced with the technology at that time. A machine to support a cloned human brain may be the first big jump into real AI. Trump that! :)
 
Yup; the door to AI has been opened slightly, accepted, and appearing ever so slowly.
 
Ding dong, algorithms are much faster today; well, in the blink of an eye of a passing moment.
 
This is only a movie preview (Knock Knock (2015)), with the subtle center of attention being a smartphone (a bit different than the movie Her (2013)) - again, it is only a movie preview.
 
Why Knock Knock?
 
[youtube]http://youtu.be/ti6S3NZ5mKI[/youtube]
 
Have you seen the latest Terminator movie?
 
They describe the history of how Gentek (sp?) started and eventually developed into Skynet that took over the world.
 
The description of the big "G" was exactly what Google does today. :)
I had to laugh out loud in the theatre. It was a little scary though.
 
Have you seen the latest Terminator movie?
 
Fell asleep.  I did not fall asleep for the first one in the series many many years ago.
 
Saw a movie called Meru (2015) recently (dinner and a movie night).  Never closed my eyes watching it. 
 
It is a documentary unrelated to AI.
 
I understand what the OP is about and am aware of it, but I do not lose sleep over it.
 
Today here on the forum we write about what we automate and how we automate and argue about what is the best means of automation using whatever.  
 
I brought up AI after seeing just some text in my email inbox and thought it would generate some lively discussions here on Cocoontech.
 
Conceptually, any machine that automates a human process can be considered synthetic, man-made intelligence, from the simplest decision-tree stuff to the most complex.
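
For what the simplest end of that spectrum looks like in home-automation terms, here is a minimal sketch; the device names and rules are invented for illustration only.

[code]
# The simplest end of the decision-tree spectrum: a hard-coded rule that
# automates one small human decision. Names and rules are invented.

def porch_light_should_be_on(is_dark_outside, motion_detected, vacation_mode):
    if vacation_mode:
        return is_dark_outside            # fake presence: on whenever it's dark
    return is_dark_outside and motion_detected

print(porch_light_should_be_on(True, False, True))   # True
print(porch_light_should_be_on(True, False, False))  # False
[/code]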
 
When said synthetic entity creates its own, if and when it does, the base is still man-made layers underneath.
 
AI is fluid and dynamically changing, getting closer to mimicking a human thought process, and your average person today accepts it without much of a gander.
 
Unrelated: my 40-year-old parrot (she is not a machine) mimics a number of voices and words that she has heard in her 40 years of life.
 
She appears to me to be making some sort of sense when we are eating dinner and she clangs her cage with her beak, saying just the word "food" over and over again; or, many times during our dinner, she just eats and drinks from her food bowls quietly, not making any fuss.
 