Sam Harris – Can we build AI without losing control over it?

Let’s talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet, rather than be scared, most of you will feel that what I’m talking about is kind of cool. I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us, or inspire us to destroy ourselves. And yet, if you’re anything like me, you’ll find that it’s fun to think about these things. And that response is part of the problem, because that response should worry you.

If I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn’t think, “Interesting. I like this TED Talk.”

Famine isn’t fun.

Death by science fiction, on the other hand, is fun. And one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I’m giving this talk.

It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this?

A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that’s ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are. And once we have machines that are smarter than we are, they will begin to improve themselves.

And then we risk what the mathematician I.J. Good called an “intelligence explosion”: that the process could get away from us. Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them; we just step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you.

I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption: intelligence is a matter of information processing in physical systems. Actually, this is a little more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there’s just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines. It’s crucial to realize that the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s law to continue. We don’t need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines.

And given the value of intelligence: intelligence is either the source of everything we value, or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.

Finally, we don’t stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived.

On almost everyone’s shortlist here is John von Neumann. The impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there’s no question he’s one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann.

And then we have you and me

And then we have a chicken.

Sorry, a chicken.

There’s no reason for me to make this talk more depressing than it needs to be.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we can currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

And it’s important to recognize that this is true by virtue of speed alone. Right? So imagine we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
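The arithmetic behind that figure is easy to check. Here is a minimal back-of-the-envelope sketch, assuming only the roughly million-fold speedup stated above (the variable names are illustrative, not from the talk):

```python
# Back-of-the-envelope check of the "20,000 years per week" claim,
# assuming the ~1,000,000x speed advantage of electronic circuits
# over biochemical ones stated above.
speedup = 1_000_000          # assumed machine-to-human thinking-speed ratio
weeks_per_year = 52.18       # average weeks in a calendar year

years_per_machine_week = speedup / weeks_per_year
print(f"{years_per_machine_week:,.0f} human-years of work per machine-week")
# -> about 19,164, i.e. on the order of the 20,000 years quoted above
```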

How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that’s worrying, frankly, is this: imagine the best-case scenario. Imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages.

Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

Now, that might sound pretty good.

But ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.

This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
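The 500,000-year figure follows from the same assumed million-fold speedup; a quick sketch, again with illustrative variable names:

```python
# Under the same assumed ~1,000,000x speedup, a six-month head start
# compounds into roughly half a million years of human-equivalent progress.
speedup = 1_000_000    # assumed machine-to-human thinking-speed ratio
lead_in_years = 0.5    # a six-month lead over the competition

print(f"{lead_in_years * speedup:,.0f} human-equivalent years ahead")
# -> 500,000
```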

So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time: this is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.”

This is the Silicon Valley version of “don’t worry your pretty little head about it.” No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again: we have no idea how long it will take us to create the conditions to do that safely.

And if you haven’t noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we’ve had the iPhone.

This is how long “The Simpsons” has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” And now we’re just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we’re told not to worry is that these machines can’t help but share our values, because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems.

Now take a moment to consider that the safest and only prudent path forward recommended is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, and given that to win this race is to win the world, provided you don’t destroy it in the next moment, it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence: not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we’re in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.
