Why Asimov's Three Laws Of Robotics Can't Protect Us

pete_c

Guru
George Dvorsky
3/28/14 12:00pm

There's a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here's how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it's critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what's called "recursive self-improvement." As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It's an advantage that we biological humans simply don't have.
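To make the feedback loop concrete, here is a deliberately toy sketch in Python (not from the article; the 10% per-round gain and the 1000x threshold are arbitrary assumptions) showing how improvements that feed back into the ability to improve compound into a large jump rather than steady linear progress:

[code]
# Toy model of recursive self-improvement (illustrative only).
# Each round, the system's current ability determines how large
# the next improvement is, so the gains compound.

def rounds_to_threshold(ability=1.0, threshold=1000.0, gain=0.10):
    rounds = 0
    while ability < threshold:
        ability += gain * ability   # smarter systems make bigger improvements
        rounds += 1
    return rounds

print(rounds_to_threshold())        # about 73 rounds for a 1000x jump at 10% per round
[/code]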

As AI theorist Eliezer Yudkowsky notes in his essay, "Artificial Intelligence as a Positive and Negative Factor in Global Risk":

    An artificial intelligence could rewrite its code from scratch — it could change the underlying dynamics of optimization. Such an optimization process would wrap around much more strongly than either evolution accumulating adaptations or humans accumulating knowledge. The key implication for our purposes is that AI might make a huge jump in intelligence after reaching some threshold of criticality.

When it comes to the speed of these improvements, Yudkowsky says it's important not to confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What's more, there's no reason to believe that an AI won't show a sudden huge leap in intelligence, resulting in an "intelligence explosion" (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness: "we went from caves to skyscrapers in the blink of an evolutionary eye."

The Path to Self-Modifying AI

Code that's capable of altering its own instructions while it's still executing has been around for a while. Typically, it's done to reduce the instruction path length and improve performance, or simply to reduce repetitive code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
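Strictly speaking, self-modifying code rewrites its own instructions in memory while running, usually so later executions take a shorter path. As a loose Python analogue (an illustration only, not how such systems are actually built), a function can replace its own binding at runtime once its expensive setup is done:

[code]
# A function that rewrites its own binding after the first call,
# so every later call skips the expensive setup entirely.

def lookup(x):
    table = {i: i * i for i in range(1000)}   # costly one-time setup

    def fast_lookup(x):
        return table[x]                       # shorter path on later calls

    globals()["lookup"] = fast_lookup         # the code alters its own dispatch
    return fast_lookup(x)

print(lookup(12))   # 144 (slow path, then self-replacement)
print(lookup(30))   # 900 (fast path from here on)
[/code]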

But as Our Final Invention author James Barrat told me, we do have software that can write software.

"Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve," he told io9. "It's also used to write innovative, high-powered software."
 

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They've chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing "Hello World!" with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
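The Primary Objects experiment evolves brainfuck programs whose output is scored against the target string. The sketch below is a rough Python stand-in (it evolves the output text directly rather than a program, so it captures the mutate-and-select loop but not the program synthesis) and shows why the approach feels more like guided brute force than reasoning:

[code]
import random
import string

TARGET = "Hello World!"
ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

def fitness(candidate):
    # Number of characters already matching the target, position by position.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(pop_size=200):
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while best != TARGET:
        offspring = [mutate(best) for _ in range(pop_size)]
        best = max(offspring + [best], key=fitness)   # keep the best so far
        generations += 1
    return generations

print("Solved in", evolve(), "generations")
[/code]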

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term "machine learning."

The Pentagon is particularly interested in this game. Through DARPA, it's hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming.

In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. Given that they'd be computer-based, and assuming they could have access to their own source code, these agents could embark upon self-modification. More realistically, however, it's likely that a superintelligence will emerge from an expert system set with the task of improving its own intelligence. Alternatively, specialized expert systems could design other artificial intelligences, and through their cumulative efforts, develop a system that eventually becomes greater than the sum of its parts.

Oh, No You Don't

Given that an artificial superintelligence (ASI) poses an existential risk, it's important to consider the ways in which we might be able to prevent an AI from improving itself beyond our capacity to control it. That said, limitations or provisions may exist that will preclude an AI from embarking on the path towards self-engineering. James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to do so:

    1. It might have source code that causes it to not want to modify itself.

    2. The first human equivalent AI might require massive amounts of hardware and so for a short time it would not be possible to get the extra hardware needed to modify itself.

    3. The first human equivalent AI might be a brain emulation (as suggested by Robin Hanson) and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we're able to copy the brain before we really understand it. But still you would think we could at least speed up everything.

    4. If it has terminal values, it wouldn't want to modify these values because doing so would make it less likely to achieve its terminal values.

And by terminal values Miller is referring to an ultimate goal, or an end-in-itself. Yudkowsky describes it as a "supergoal." A major concern is that an amoral ASI will sweep humanity aside as it works to accomplish its terminal value, or that its ultimate goal is the re-engineering of humanity in a grossly undesirable way (at least from our perspective).

Miller says it could get faster simply by running on faster processors.

"It could also make changes to its software to get more efficient, or design or steal better hardware. It would do this so it could better achieve its terminal values," he says. "An AI that mastered nanotechnology would probably expand at almost the speed of light, incorporating everything into itself."

But we may not be completely helpless. According to Barrat, once scientists have achieved Artificial General Intelligence — a human-like AI — they could restrict its access to networks, hardware, and software, in order to prevent an intelligence explosion.

"However, as I propose in my book, an AI approaching AGI may develop survival skills like deceiving its makers about its rate of development. It could play dumb until it comprehended its environment well enough to escape it."

In terms of being able to control this process, Miller says that the best way would be to create an AI that only wanted to modify itself in ways we would approve.

"So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity," he says. "This way as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us."

Fast or Slow?

As noted earlier, a recursively improving AI could increase its intelligence extremely quickly. Or it could be a process that takes time, for various reasons such as technological complexity or limited access to resources. It's an open question whether we can expect a fast or slow take-off event.

"I'm a believer in the fast take-off version of the intelligence explosion," says Barrat. "Once a self-aware, self-improving AI of human-level or better intelligence exists, it's hard to know how quickly it'll be able to improve itself. Its rate of improvement will depend on its software, hardware, and networking capabilities."

But to be safe, Barrat says we should assume that the recursive self-improvement of an AGI will occur very rapidly. As a computer it'll wield computer superpowers — the ability to run 24/7 without pause, rapidly access vast databases, conduct complex experiments, perhaps even clone itself to swarm computational problems, and more.

"From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance. According to AI theorist Steve Omohundro's Basic Drives analysis, self-improvement would be a sure-fire way to improve its chances of success," says Barrat. "So would self-protection, resource acquisition, creativity, and efficiency. Without a provably reliable ethical system, its drives would conflict with ours, and it would pose an existential threat."

Miller agrees.

"I think shortly after an AI achieves human level intelligence it will upgrade itself to super intelligence," he told me. "At the very least the AI could make lots of copies of itself each with a minor different change and then see if any of the new versions of itself were better. Then it could make this the new 'official' version of itself and keep doing this. Any AI would have to fear that if it doesn't quickly upgrade another AI would and take all of the resources of the universe for itself."

Which brings up a point that's not often discussed in AI circles — the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could be as simple as detecting an obstruction to its terminal value), it could enter into a lightning-fast arms race along those verticals designed to ensure its ongoing existence and future freedom of action. And in fact, while many people fear a so-called "robot apocalypse" aimed directly at extinguishing our civilization, I personally feel that the real danger to our ongoing existence lies in the potential for us to be collateral damage as advanced AGIs battle it out for supremacy; we may find ourselves in the line of fire. Indeed, building a safe AI will be a monumental — if not intractable — task.
 
Side comment unrelated to OP...relating to time (a hobby here)
 
Better Time now with more GPS satellites in view...clocking pps
in view 12, in use 6
Stratum 0
Jitter 0.004
Internet Stratum 1
Jitter 2.927
 
Something that always seems to get ignored in such discussions is that you aren't going to get any order of magnitude increases in intelligence without order of magnitude (or larger) increases in hardware capabilities. You can rewrite your own software all day, but until you have the ability to design new computing technologies (which requires laboratories and lots of elaborate gear that will have to be built), productize those new technologies (harder sometimes than designing them), and design and build the factories to produce them, you are stuck with the fundamental capabilities you have. All of those things require access to space and resources that would be heavily contested, and many of which would lie outside the geographical reach of the AI, even if it had a fairly significant amount of physical means.
 
Dean Roddey said:
Something that always seems to get ignored in such discussions is that you aren't going to get any order of magnitude increases in intelligence without order of magnitude (or larger) increases in hardware capabilities. You can rewrite your own software all day, but until you have the ability to design new computing technologies (which requires laboratories and lots of elaborate gear that will have to be built), productize those new technologies (harder sometimes than designing them), and design and build the factories to produce them, you are stuck with the fundamental capabilities you have. All of those things require access to space and resources that would be heavily contested, and many of which would lie outside the geographical reach of the AI, even if it had a fairly significant amount of physical means.
 
Not sure I agree totally. I'm not talking Skynet, more along the lines of flexible home automation that learns from its users (and yes, I know how hard that is). If we stop looking at the computer as a single box and start looking at it as a single world (unrealistic, but I think you get the point: very many boxes acting as one) and we write our software accordingly, we could get a serious jump in flexible software power. It would require that programmers switch from serial thinking to parallel thinking, but we'll still need some method of semaphores. I think asynchronous programming is a step in the right direction, but only a baby step. Definitely not the be-all and end-all of programming. I also don't agree with the AI enthusiasts who think that there will be a huge jump in real AI in the next few years.
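For what it's worth, here is a toy Python asyncio sketch of that "many boxes, but still some method of semaphores" idea (the device names and timings are made up):

[code]
import asyncio

async def poll_device(name, limiter):
    # The semaphore caps how many devices we talk to at the same time.
    async with limiter:
        await asyncio.sleep(0.1)              # stand-in for a network round trip
        return f"{name}: ok"

async def main():
    limiter = asyncio.Semaphore(4)            # at most 4 concurrent polls
    devices = [f"sensor-{i}" for i in range(20)]
    results = await asyncio.gather(*(poll_device(d, limiter) for d in devices))
    print(results)

asyncio.run(main())
[/code]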
 
The work being done with AI/ML (hard to tell them apart) and software defined networks is some pretty interesting stuff.
 
Personally, I don't think a human-equivalent artificial intelligence will be man's last invention. I don't know if I will see it in my lifetime.
 
As Neil mentions above, we have already achieved bits and pieces of AI in baby-step fashion.
 
Automation is just now touching on a bit of AI. The use of robotics in manufacturing has been around for a very long time, and every year a bit more AI gets added to automated processes, faster than one assumes.
 
Check out this video of Amazon's warehouse automation.
 
[youtube]http://youtu.be/0hJU5qYiI1o[/youtube]
 
In 1995 I watched / walked the assembly line alone for the Chevrolet Astro Van in Baltimore, Maryland.

It was automated and mostly controlled remotely by EDS at the time.

There was a ghost-town feeling, as it was just me and all I heard were the sounds of the machines.

Very much different from walking the Ford assembly plant lines in Atlanta, Chicago, or St. Louis, Missouri.

There was talk at the time (1995) of installing informational touch-screen kiosks for common questions in the assembly plants.
 
linuxha said:
Not sure I agree totally. I'm not talking Skynet, more along the lines of flexible home automation that learns from its users (and yes, I know how hard that is). If we stop looking at the computer as a single box and start looking at it as a single world (unrealistic, but I think you get the point: very many boxes acting as one) and we write our software accordingly, we could get a serious jump in flexible software power. It would require that programmers switch from serial thinking to parallel thinking, but we'll still need some method of semaphores. I think asynchronous programming is a step in the right direction, but only a baby step. Definitely not the be-all and end-all of programming. I also don't agree with the AI enthusiasts who think that there will be a huge jump in real AI in the next few years.
 
The work being done with AI/ML (hard to tell them apart) and software defined networks is some pretty interesting stuff.
 
But the gotcha is that all those folks out there aren't going to just give access to all of their computing resources to an AI. The whole network is only as available as the owners of the various computing resources on that network allow it to be. I'm talking here about the 'AI gets too powerful' scenario, not the desire to create an AI. No AI can take over the network, because that would require it to physically defend its takeover of large numbers of computing centers around the world in order to extend its computational capabilities, which it could never do. Most of those, and the factories and labs it would need to achieve any real self-replication, aren't going to be militarized (which is always the movie scenario, where the AI takes over the armaments meant to protect the computing facility so the humans can't shut it down). Those will almost all be well in control of the humans running the place, for obvious reasons.
 
Personally, I think that if we have anything to really worry about, it's a massive surveillance society run by people, not a world run by a computer. The former is likely to do us in (in terms of democracy and self-determination) long before an AI will. And the scary thing is that they won't even have to do much illegal to make it happen. They just provide more and more technology to people who will use it unthinkingly, and all of it providing more and more information about its users. And they continue to use people's egos to leverage them to document every single minute of their lives online so that you don't have to even do anything to track them all the time.
 
use it unwittingly...
 
Yes, to think it began with folks posting and sharing their bowel movements on Facebook.

Probably today there is a paid-for app that automatically records your bowel movements via your smartphone to a cloud-connected application.  It's so easy..... ;)
 
Dean Roddey said:
But the gotcha is that all those folks out there aren't going to just give access to all of their computing resources to an AI. The whole network is only as available as the owners of the various computing resources on that network allow it to be. I'm talking here about the 'AI gets too powerful' scenario, not the desire to create an AI.
I tend to agree, it shouldn't happen as long as we're not stupid. That last part has me worried.
 

“God created dinosaurs. God destroyed dinosaurs. God created Man. Man destroyed God. Man created AI.

AI eats man ... Woman inherits the earth.”
Dean Roddey said:
Personally, I think that if we have anything to really worry about, it's a massive surveillance society run by people, not a world run by a computer. The former is likely to do us in (in terms of democracy and self-determination) long before an AI will.
Not going to argue with that (and won't bring politics into this either).
 
It's kind of a toss up between the Gov't and Big Business.
 
I ride my bicycle in traffic and yes they are out to get me. ;-)
 
pete_c said:
use it unwittingly...
 
Yes, to think it began with folks posting and sharing their bowel movements on Facebook.

Probably today there is a paid-for app that automatically records your bowel movements via your smartphone to a cloud-connected application.  It's so easy..... ;)
 
It's under health care (poot .. 'cuse me) ;-)
 
@Neil,
 
How far is it that you commute on your bicycle?  (does it matter about the weather?)
 
Working in London for a bit, rental was near a train station and work was across a train station and my preferred commute was to walk and not to ride on the Tube at the time.  But I was in a place that I was unfamiliar with.  I never really saw any bicycle commuters at the time.  Looked a bit treacherous though.
 
Here in the Midwest I mostly see downtown walkers never looking up, just walking with their noses buried in their smartphones.  Bicycle folks are there, well protected, but it looks dangerous nonetheless.
 
it shouldn't happen as long as we're not stupid.
 
We are, and mostly folks trust technology with open arms, happy campers who never look at the 10,000-foot view and only perceive what is in front of their face.  That is the way it is.
 
They will embrace next level AI with open arms. 
 
Initial radial keratotomy, invented in 1974, was done by hand.  Today LASIK is just pushing a button after a computer decides where to cut.  Yes, the surgeon is still there and will push the button. It is just a tool, and tomorrow will be the same with AI, except for the unknowing.
 
pete_c said:
@Neil,
 
How far is it that you commute on your bicycle?  (does it matter about the weather?)
 
 
I don't commute now, I telecommute (and I'm taking advantage of that as much as possible). The furthest I've commuted was 30 miles one way. I've commuted in cold weather (9F, won't do that again, limited to 32F now), hot (90F), and mild rain. My thoughts on distance are not considered normal with most cycling clubs.
 
My weekend rides extend to 120 miles (in June), 100 miles for most weekend rides. I did 51 today. Lunchtime rides tend to be 17 - 32 miles. Next year I'll do another 210-mile ride (in June). That will be the 11th time I've done that. I don't use technology on my rides other than the bike's normal tech (Ti, Al or carbon fiber). For my weekend rides, no riding in the rain below 70F. I prefer not to ride in the rain anymore.  Cold-weather cutoff is 32F, hot-weather cutoff 100F. We've done colder and we've done hotter, but I've learned not to do that.
 
pete_c said:
it shouldn't happen as long as we're not stupid.

 
We are, and mostly folks trust technology with open arms, happy campers who never look at the 10,000-foot view and only perceive what is in front of their face.  That is the way it is.
 
They will embrace next level AI with open arms. 
 
Agreed, hopes of lower cost are driving that. Most folks have no understanding of the technology. It's magic to them.
 
pete_c said:
Initial radial keratotomy, invented in 1974, was done by hand.  Today LASIK is just pushing a button after a computer decides where to cut.  Yes, the surgeon is still there and will push the button. It is just a tool, and tomorrow will be the same with AI, except for the unknowing.
 
There will be a lot of that with AI/ML, but there are a lot of places that can be abused for financial gain. I look forward to the tools that take data and find patterns. Learning the how will be interesting.
 
I tend to agree, it shouldn't happen as long as we're not stupid.
 
I feel that I'm behaving stupidly online and continue to do it anyway because of the almost necessary convenience. Do we read the contract every time we get an iOS upgrade or any other IP-connected device? I worry even more about the fact that I communicate with financial institutions online, knowing that when things go badly I deserve what I get, but I don't see how I could do business with them without electronic communications.
 
I'm referring to computing errors or intentional hacking hurting us but I think that in the future AI could become a real threat. When I was a programmer we used the term "undocumented feature" to describe a function that we didn't design into the program but discovered in testing. As in "hey cool - look what this thing just did!"
 
Mike.
 
I worry even more about the fact that I communicate with financial institutions online, knowing that when things go badly I deserve what I get, but I don't see how I could do business with them without electronic communications.
 
There is more to that than internal / external resource hacking.  Many times a financial institution automates primarily to make money and to have fewer people and more machines managing stuff.
 
Years ago, as banks started to automate, rules / decision trees (a little bit of AI) were written into the back ends, mostly untouched by human hands.  The decision trees were written to make a profit while concurrently automating processes that improved customer service.
 
While this was going on, it became harder to circumvent the automated decision tree, as the rules were hard-coded.  Rules are rules, and typically you accept them.
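The kind of hard-coded rule tree I mean looks something like this (the thresholds and outcomes are purely illustrative, not any real bank's policy):

[code]
def overdraft_decision(balance, account_age_years, avg_monthly_deposit):
    # A hand-written decision tree: fixed rules, no appeal, no human in the loop.
    if balance >= 0:
        return "no action"
    if account_age_years < 1:
        return "decline transaction and charge fee"
    if avg_monthly_deposit > 2000:
        return "cover overdraft, waive fee"
    return "cover overdraft, charge fee"

print(overdraft_decision(balance=-50, account_age_years=3, avg_monthly_deposit=2500))
[/code]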
 
In recent times I have seen financial institutions (a major US bank / credit card company combo) hide, or not publicly announce, that their systems have been compromised.  It looks bad for them.  Just a couple of years ago one large financial institution simply stopped its back end and decided to lose transactions, blaming it on snail mail and systems issues, or post them late to catch up, and ended up back-charging customers so that it could pay for a neglected back-end restructuring / re-engineering.
 
Unrelated little tidbit... (where AI-automated processes are in place and written by and for....)... call it trickle-down economics...
 
The LIBOR and gold rates came under special scrutiny in the wake of the two major bear markets in the first decade of this Millennium. The end of each “bear” was marked by renewed zeal among regulators and market players to usher in policies that would prevent a repetition of the previous boom-and-bust cycle. Thus, in the aftermath of the 2000-02 bear market – whose casualties included Enron, WorldCom and a number of other companies enmeshed in accounting irregularities – Sarbanes-Oxley legislation was enacted to improve corporate governance, while regulations aimed at preserving the independence of investment research were also introduced.

In the wake of the 2008-09 global bear market, excessive risk-taking by financial institutions came under the regulatory spotlight. One effect of this heightened scrutiny was to expose the manipulative practices used by influential banks and institutions to fix benchmark rates for interest rates, currencies, and perhaps even gold.

The primary motivation for manipulating benchmark rates is to boost profits. A secondary motivation, specifically with regard to LIBOR, was to downplay the level of financial stress that certain banks were under at the height of the 2007-2009 global credit crisis. These banks deliberately lowered their LIBOR submissions during this time to convey the impression that their counterparties had a higher degree of confidence in them than they actually did.

1. Administration by independent bodies
2. More effective regulation

Regulatory authorities have been quick to take action in the LIBOR-fix and forex-fix scandals. A number of prominent heads have already rolled in connection with the LIBOR-fix debacle, specifically at Barclays, where three top executives resigned in 2012. In December 2013, the European Union penalized six leading banks and financial institutions with a record EUR 1.71 billion fine for their role in the LIBOR scandal. In the forex-fix fiasco, at least a dozen regulators on both sides of the Atlantic are investigating allegations of forex traders’ collusion and rate manipulation, and more than 20 traders have already been suspended or fired as a result of internal inquiries.

The Bottom Line

At the end of the day, the antiquated processes of yesteryear that have been used to set benchmark rates for decades may need a complete overhaul to promote greater transparency and prevent future abuses. Abandoning the use of the term “fix,” with its distinctly negative connotation in financial markets, would be a good place to start.

 
Another unrelated tidbit...
 
 
Money-laundering affects the first world as well, since a favored shell company investment is real estate in Europe and North America. London, Miami, New York, Paris and Vancouver have all been affected. The practice of parking assets in luxury real estate has been frequently cited as fueling skyrocketing housing prices in Miami.  "There is a huge amount of dirty money flowing into Miami that's disguised as investment," according to former congressional investigator Jack Blum. In Miami, 76% of condo owners pay cash, considered a red flag for money-laundering.

Real estate in London, where housing prices increased 50% from 2007 to 2016, also is frequently purchased by overseas investors.  Donald Toon, head of Britain's National Crime Agency, said in 2015 that "the London property market has been skewed by laundered money. Prices are being artificially driven up by overseas criminals who want to sequester their assets here in the UK". Three quarters of Londoners under 35 cannot afford to buy a home. Andy Yan, an urban planning researcher and adjunct professor at the University of British Columbia, studied real estate sales in Vancouver—also thought to be affected by foreign purchasers—and found that 18% of the transactions in Vancouver's most expensive neighborhoods were cash purchases, and 66% of the owners appeared to be Chinese nationals or recent arrivals from China. Calls for more data on foreign investors have been rejected by the provincial government. Chinese nationals accounted for 70% of 2014 Vancouver home sales over $3 million Canadian. On June 24, 2016 China CITIC Bank Corp filed suit in Canada against a Chinese citizen who borrowed 50 million yuan for his lumber business in China, but then withdrew roughly $7.5 million Canadian from the line of credit and left the country. He bought three houses in Vancouver and Surrey, British Columbia, together valued at $7.3 million Canadian during a three-month period in June 2014.


 
 