Super Intelligence


How computers will get smart and then either kill us or make us gods... or some other thing that doesn't sound as dramatic


Posted 24 Feb 2017    Edited 26 Feb 2017


Each man below has raised a warning voice over artificial superintelligence:

Images courtesy of The Guardian, NY Times, Huffington Post, and Washington Post

To be fair, all of these guys have also outlined the enormous potential for superintelligence to usher in a new human age that could be so good we can't even imagine it: the end of every problem ever and the creation of impossible-to-comprehend happiness and well-being.

AI Is


There's a lot of artificial intelligence being implemented right now. Like, when Facebook recognizes faces or when you do a Google reverse image search or when Teslas go on autopilot.

Artificial intelligence is basically when a computer goes beyond raw computation and statistics and performs an action typically reserved for human thinking [like visual/speech recognition or decision making]. Artificial intelligence tools were implemented in banking as early as 1987. The reason we're only now starting to hear about AI might be an effect Nick Bostrom has explained:

A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.

Perhaps it's also that the artificial intelligence you and I interact with on a daily basis is so clearly not on par with human intelligence. Like, the word suggestion/autocorrect your phone keyboard gives you - it's incredible that an AI can figure out your next likely word based on what your previous words have been and what it's learned from your style of writing and stuff.
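For a taste of how that kind of suggestion can work, here's a minimal sketch of a bigram-style predictor - way cruder than what a real keyboard does, and the sample sentence is just made up:

from collections import Counter, defaultdict

# Count which word tends to follow which in some sample text
# [hypothetical sample - a real keyboard learns from your actual typing]
follows = defaultdict(Counter)
sample = "the dog ate the food and the dog slept".split()
for prev, nxt in zip(sample, sample[1:]):
    follows[prev][nxt] += 1

def suggest(word):
    # Suggest the word most often seen after `word`, if any
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(suggest("the"))  # 'dog' - it followed 'the' twice, 'food' only once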

But then there are those moments where we're like okay...



...the phone has no idea what's going on. Just the other day, my wife and I were talking, and her iPhone all of a sudden says, "Calling Ramón Hernandez," out of nowhere.

As it stands, even really advanced AIs aren't intelligent the way people are intelligent. Like, when Facebook "recognizes" a person in a photo, it's not like the computer looks at it, sees people, and then is like, "Oh, I know who that is! It's Stephen!"

The computer knows pixels in the image file only as numeric values that represent position and brightness and color and stuff. The values are analyzed as a system, with algorithms that can recognize patterns in the numbers. Like, if letters represent color {white: w, green: g, black: b, peach: p}, a white person's eye might look something like:

p p p p p p p p p p w w w g g g g g w w w p p p p p p p p p p
p p p p p p p w w w w g g g g g g g g g w w w w p p p p p p p
p p p p w w w w w g g g g b b b b g g g g w w w w w p p p p
p p w w w w w w g g g g b b b b b b g g g g w w w w w w p p
p p p p w w w w w g g g g b b b b g g g g w w w w w p p p p
p p p p p p p w w w w g g g g g g g g g w w w w p p p p p p p
p p p p p p p p p p w w w g g g g g w w w p p p p p p p p p p

But a computer doesn't actually see color, so let's take that away:

p p p p p p p p p p w w w g g g g g w w w p p p p p p p p p p
p p p p p p p w w w w g g g g g g g g g w w w w p p p p p p p
p p p p w w w w w g g g g b b b b g g g g w w w w w p p p p
p p w w w w w w g g g g b b b b b b g g g g w w w w w w p p
p p p p w w w w w g g g g b b b b g g g g w w w w w p p p p
p p p p p p p w w w w g g g g g g g g g w w w w p p p p p p p
p p p p p p p p p p w w w g g g g g w w w p p p p p p p p p p

Also, a computer only recognizes the numeric values associated with color: so it will analyze something like an RGB color model, where values of red light, green light, and blue light add together to make each unique resulting color:

[255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,255,255] [255,255,255]
[255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [000,255,000]
[255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [000,255,000] [000,255,000] [000,255,000]
[255,218,185] [255,218,185] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [000,255,000] [000,255,000] [000,255,000] [000,255,000]
[255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [000,255,000] [000,255,000] [000,255,000]
[255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,255,255] [255,255,255] [255,255,255] [255,255,255] [000,255,000]
[255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,218,185] [255,255,255] [255,255,255]

Still, a computer doesn't have eyes to see the positions of any of these pixel points, so it has to have some kind of position index for each pixel [shown here in parentheses]. And it will read all the data as one long train of information:

[[[255,218,185],(0,0)],[[255,218,185],(0,1)],[[255,218,185],(0,2)],[[255,218,185],(0,3)],[[255,218,185],(0,4)],[[255,218,185],(0,5)],[[255,218,185],(0,6)],[[255,218,185],(0,7)],[[255,218,185],(0,8)],[[255,218,185],(0,9)],[[255,255,255],(0,10)],[[255,255,255],(0,11)],[[255,218,185],(1,0)],[[255,218,185],(1,1)],[[255,218,185],(1,2)],[[255,218,185],(1,3)],[[255,218,185],(1,4)],[[255,218,185],(1,5)],[[255,218,185],(1,6)],[[255,255,255],(1,7)],[[255,255,255],(1,8)],[[255,255,255],(1,9)],[[255,255,255],(1,10)],[[000,255,000],(1,11)],[[255,218,185],(2,0)],[[255,218,185],(2,1)],[[255,218,185],(2,2)],[[255,218,185],(2,3)],[[255,255,255],(2,4)],[[255,255,255],(2,5)],[[255,255,255],(2,6)],[[255,255,255],(2,7)],[[255,255,255],(2,8)],[[000,255,000],(2,9)],[[000,255,000],(2,10)],[[000,255,000],(2,11)],[[255,218,185],(3,0)],[[255,218,185],(3,1)],[[255,255,255],(3,2)],[[255,255,255],(3,3)],[[255,255,255],(3,4)],[[255,255,255],(3,5)],[[255,255,255],(3,6)],[[255,255,255],(3,7)],[[000,255,000],(3,8)],[[000,255,000],(3,9)],[[000,255,000],(3,10)],[[000,255,000],(3,11)]]]

So when an AI "looks" at a picture, this is what it sees. Like, algorithms can be used to recognize apples, but they only see apples using a string of data like the one above. I'm not trying to say that computer vision is less interesting or amazing than human vision - just that it's a whole different ballgame.
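Here's a tiny sketch [in Python, with the grid hard-coded] of how that flattening works - building the same color-plus-position train from the eye grid above:

# Toy sketch: flatten a tiny image into the [[R,G,B], (row, col)] train above
PEACH, WHITE, GREEN = [255, 218, 185], [255, 255, 255], [0, 255, 0]

image = [
    [PEACH] * 10 + [WHITE] * 2,
    [PEACH] * 7 + [WHITE] * 4 + [GREEN],
    [PEACH] * 4 + [WHITE] * 5 + [GREEN] * 3,
    [PEACH] * 2 + [WHITE] * 6 + [GREEN] * 4,
]

# One long train of information: each pixel's color plus its position index
train = [[color, (row, col)]
         for row, cells in enumerate(image)
         for col, color in enumerate(cells)]

print(train[0])   # [[255, 218, 185], (0, 0)]
print(train[23])  # [[0, 255, 0], (1, 11)] - the first green pixel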

The same principle holds for basically all AI right now: artificial intelligence doesn't understand what it's doing. It's all a value-optimizing game applied to all kinds of information, and sometimes the result mimics some human response or skill.
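To make "value-optimizing game" concrete, here's a toy sketch - my own illustration, not how any real vision system is built - where "recognizing" the pupil just means scoring every patch of a made-up grayscale grid and keeping the best one:

# Toy "recognition as optimization": find the darkest 2x2 patch in a
# grayscale grid (0 = black, 255 = white). Made-up numbers, not real data.
grid = [
    [255, 255, 200, 200, 255],
    [255, 120,   0,   0, 200],
    [200, 120,   0,   0, 200],
    [255, 255, 200, 200, 255],
]

def darkness(r, c):
    # Score a 2x2 patch by how dark it is (higher = darker)
    return sum(255 - grid[r + i][c + j] for i in range(2) for j in range(2))

# The "intelligence" is nothing but picking the candidate with the best score
best = max(((r, c) for r in range(3) for c in range(4)),
           key=lambda rc: darkness(*rc))
print(best)  # (1, 2) - the top-left corner of the "pupil"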

Even a very powerful future AI that can see and talk and make decisions just as well as a person may never have any degree of consciousness or real understanding of what it's doing.

Super Human


So, developers can make AIs that can tell you which movies on Netflix you'll probably like, or which items to show you on eBay's home page based on your browsing history, and all kinds of stuff like that. There are also AIs optimized for learning games.

One really cool example is DeepMind's AlphaGo. It's an AI built for one purpose: to play and win at Go.

Go is an ancient Chinese board game that's way, way more complicated than chess.
Computers had been built to win at chess before. It's basically a process of coding in the rules of chess and the movement of the pieces; after that, the computer can search through huge numbers of possible move combinations, avoiding the ones that lead to a greater chance of defeat and taking the ones that lead to a greater likelihood of victory.
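Here's the core of that look-ahead idea - a bare-bones minimax sketch on a toy game [Nim: players alternate taking 1-3 sticks, last stick wins], since real chess search piles pruning and heuristics on top of the same loop:

def minimax(sticks, my_turn):
    # +1 means "I" can force a win from here; -1 means "I" can't
    if sticks == 0:
        return -1 if my_turn else 1  # whoever just moved took the last stick
    outcomes = [minimax(sticks - take, not my_turn)
                for take in (1, 2, 3) if take <= sticks]
    # On my turn I pick my best outcome; on yours I assume you pick my worst
    return max(outcomes) if my_turn else min(outcomes)

def best_move(sticks):
    return max((t for t in (1, 2, 3) if t <= sticks),
               key=lambda t: minimax(sticks - t, my_turn=False))

print(best_move(10))  # 2 - leaves 8 sticks, a losing position for the opponent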

Go, though, has all these complicated strategies and nuances that aren't cut and dried like in chess. In Go, players take turns placing stones on a grid. By surrounding an opponent's stones, a player can capture them and take them off the board [kind of like eliminating a piece in chess, or jumping a piece in checkers], and whoever controls more of the board in the end wins.
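For the curious, here's a toy sketch of that capture rule on a 3x3 board [my own illustration, nothing to do with AlphaGo's internals]: a group is captured when a flood fill through its same-colored stones finds no empty neighbor.

# Toy version of the capture rule on a 3x3 board:
# '.' is empty, 'B' is black, 'W' is white
board = [list(".B."),
         list("BWB"),
         list(".B.")]

def captured(r, c):
    # A group is captured when no stone in it touches an empty point
    color, seen, stack = board[r][c], set(), [(r, c)]
    while stack:
        y, x = stack.pop()
        if (y, x) in seen:
            continue
        seen.add((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < 3 and 0 <= nx < 3:
                if board[ny][nx] == ".":
                    return False  # found a liberty - the group lives
                if board[ny][nx] == color:
                    stack.append((ny, nx))  # same color: part of the group
    return True

print(captured(1, 1))  # True - the white stone is completely surrounded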

At the beginning of 2016, some AI experts were saying that building a machine that could beat a Go champion was still ten years out. In March of that year, AlphaGo beat the international champ Lee Sedol.

Instead of simply coding in the rules of Go and having the computer derive the best strategies from purely simulated moves, AlphaGo was taught how to win by studying thousands of actual matches between professional players. After taking in millions of moves, AlphaGo was left to play against itself, learning from enormous amounts of simulated gameplay.
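The real system pairs deep neural networks with Monte Carlo tree search and is way beyond a blog snippet, but here's a miniature of the self-play feedback loop, reusing the Nim game from the chess sketch [a toy of my own, not DeepMind's method]: moves that end up on the winning side get reinforced, moves on the losing side get dialed down.

import random
from collections import defaultdict

# Miniature self-play: a bot learns Nim by playing itself over and over
weights = defaultdict(lambda: {1: 1.0, 2: 1.0, 3: 1.0})  # sticks -> move scores

def pick(sticks):
    options = [t for t in (1, 2, 3) if t <= sticks]
    return random.choices(options, weights=[weights[sticks][t] for t in options])[0]

def self_play():
    history, sticks, player = [], 10, 0
    while sticks > 0:
        move = pick(sticks)
        history.append((sticks, move, player))
        sticks -= move
        player = 1 - player
    winner = 1 - player  # whoever took the last stick
    for state, move, side in history:
        weights[state][move] *= 1.1 if side == winner else 0.9

for _ in range(5000):
    self_play()

print(max(weights[10], key=weights[10].get))  # usually 2 - the winning move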

You can read a step-by-step account of the match between AlphaGo and Lee Sedol here, on DeepMind's website, with commentary on critical moves by Fan Hui [a Chinese-born French player who was smashed by AlphaGo in October 2015]. About halfway through the match, AlphaGo is like, "I'm, like, 70% sure I'm gonna win," and that confidence level keeps climbing with every move till it's just like, "Yeah, game over, Sedol. Whenever you wanna resign, I'm ready."

Besides professional Go players, no one needs to worry about going head-to-head with AlphaGo. It's only good at Go. It's not going to beat you in Trivial Pursuit or get a perfect SAT score or anything like that. Its genius is confined to a Go board.

The real problem [or miracle, potentially] of AI is the future development of general artificial intelligence: an AI that isn't confined to movie suggestions or little games or autocorrecting your texts. If an AI can cross domains, applying what it learns in one activity to be smart about some other thing that isn't directly related [the way we humans do], we may find ourselves "in the biggest pickle... any of us has ever seen."



Like ever.

Where AI's Headed


Even with all the investment Facebook, Google, and Amazon are dumping into AI development right now, none of them are going to churn out a human-level AI anytime soon. Like, unless some radical unexpected change happens, most experts think a high-level AI won't be developed until around 2040-2050.

The problem with that estimate is the planning fallacy: basically, humans are always way too optimistic about how long it will take to accomplish things. In fact, high-level AI has been considered "on the horizon" at lots of points in computer science history.

In 1956, some comp sci professors got together in Hanover, New Hampshire and coined the phrase "artificial intelligence" and then were like, "Okay, let's figure out how to make a computer as smart as a person." The sentiment was that a couple of problems would need to be solved [like computer vision and hearing and decision making] and then you could just put all the pieces together and have a machine that was smart the way a person is.

Founding members of the 1956 Dartmouth Conference. Image courtesy of AI Magazine
Turns out, not that easy.

One of the great ironies was the anthropomorphizing trap the comp sci community fell into. They figured, "Well, any idiot can understand speech and read handwriting and walk around, so we'll start with those tasks first. Once we've got that easy stuff down, we'll work on making the machines really good at smart-people stuff like chess and math and stuff."

It turns out that the base human abilities we take for granted are fricking hard to write into an algorithm. Like, getting a computer to do even one task at human level [seeing, hearing, navigating a cluttered environment] is rough, let alone integrating all of those abilities into a single AI. By comparison, getting a computer to checkmate Bobby Fischer in his prime is a piece of cake.

Eventually, the difficulty of creating jaw-dropping AI became so apparent that venture capitalists and other interested parties kinda gave up on it. Academic researchers [like the Dartmouth Conference guys] continued to make progress, but it was hard for them to get any funding or positive media attention. This period was called the AI winter, and it kinda came in two waves: from 1974-80 and again from 1987-93.

So, when experts express confidence that human-level AI is coming in the next few decades, we gotta take it with a grain of salt.

Given that our species has survived for over a hundred thousand years, though, it's a fair bet we'll stick around long enough to keep chipping away at the problem. And it's not like we're going to move backwards in our knowledge of AI.

So human-level AI will be realized at some point; whether that's in twenty or two hundred years remains to be seen.

And one potentially horrifying problem with human-level AI is illustrated by Bostrom when he says, "the [intelligence] train doesn't stop at humanville station. It's likely, rather, to swoosh right by."

Superintelligence


If I were to actually write this section, I'd just end up quoting everything in Bostrom's TED presentation; so here you are. If you're trying to be quiet or something and you can't watch it, TED's got a transcription for ya here.



Bostrom has talked about how humanity's number one existential fear at any given time kinda behaves like a fad: the thing we think is gonna kill us goes in and out of style. It used to be Communism, then nuclear war, then overpopulation - now climate change.

Bostrom predicts that machine superintelligence will likely be our next big fear - but that this one might actually be orders of magnitude more legitimate than the others.

The End Of


I'm Mormon. No matter how hard I try to not see things through that lens, I can't really get away from it; and I've thought a bit about what the emergence of a superintelligence would mean for my faith.

One of Mormonism's tenets is service. Like, local Mormon units [called wards] are all built on a model of altruistic time-giving work.

Say you've got an old lady in your ward who is poor and alone.

Every month, members of the ward go without eating for two meals and donate what they likely would have spent on those meals to the bishop [the leader of the ward]. The bishop uses money from that fund to help members of the ward with financial needs. He'll take two hundred dollars from the fund, for example, and pay the old lady's utilities and stuff.

In a well-functioning ward, members of the women's organization will visit her probably once a week, and two guys from the men's organization will visit her every month, bringing food and spending some time with her.

Any time she needs yard work done, or old crap moved out of her basement, the ward will organize fifteen or twenty people to go over and help her out for a couple hours on a Saturday.

And this kind of stuff is happening constantly in a ward. Members are helping members - sometimes in a one-on-one setting, sometimes as a whole ward. There's also like ward social activities and community-level service projects and all kinds of stuff that occupy Mormons' time. And everybody does it for free.

My opinion of it: it is awesome.

The ward I'm in right now covers generally low-income families and individuals. There's always plenty to do to help people out, and the folks are way down to earth. It feels good to help people and it also feels good to get helped [my bishop basically acted as a real estate agent for my wife and me and found us an amazing place to live that we would've never known about if it hadn't been for him].

But how does this model survive in a world with superintelligence? Like, assuming the control problem is solved and AI doesn't end in global catastrophe, how would a benevolent super AI disrupt the Mormon model? Like, what's the point of me doing something to help my old lady neighbor, when her problems are likely all handled better by the AI?

Or God; what about Him?

What happens when a machine starts to embody more and more of the attributes we typically ascribe to God? Does it make sense to worship an unseen God when a machine starts to look more and more like Him? Or will we just forget about religion and God and everything else [at the rate that's basically happening now, or faster]?

I wonder if I'll ever have to make the decision between my faith and technological progress. So far, the church has adapted pretty well to surviving in a changing world. We'll see how it does with what will probably be the grandest, most disruptive change in human history.