
Experts Petition To Keep Computers On Humanity's Side

SCOTT SIMON, HOST:

Ever since computers appeared, humans have eyed them with awe, envy and suspicion. Could these machines, created by man to think faster than man, one day outthink our species and decide that they can do without us? After all, humans can be so slow and sentimental. This is not just a storyline for science-fiction writers. Last week, a host of experts in the field signed an open letter urging research to make sure that artificial intelligence has a positive, not destructive, impact on humanity. Elon Musk, the billionaire tech investor, pledged $10 million to the organization behind that letter - the Future of Life Institute.

Philosopher Nick Bostrom is among those who signed that letter, and he joins us from the studios of the BBC in Oxford. Thanks so much for being with us.

NICK BOSTROM: Thanks for inviting me.

SIMON: How could machines that we've invented, that we program, that we can turn on and off, potentially undo us?

BOSTROM: Well, right now, you'll be happy to hear, I don't think there is any danger. I think the concern is that as we continue to make progress, then eventually there might come a point when they are as smart as we are or even smarter. And intelligence in general is a very powerful thing. It's what makes us very powerful relative to other animals here on this planet. It's not because we have stronger muscles or sharper claws that the fate of the gorillas is now in our hands rather than in the gorillas' hands. And so similarly, if we create machines that exceed human intelligence, then those machines also could be very powerful relative to us.

SIMON: Well, I was flabbergasted when I began to do some reading. Although you say it's not about to happen - that it's a while away - it is possible that children listening to this broadcast in the back seat of their parents' automobile might have to confront this?

BOSTROM: Yeah, so that's why I think it's very positive that Elon Musk and some other people are now saying it's time to actually start working out the technology that we would need to keep machines safe, even if they eventually reach human-level intelligence.

SIMON: I mean, what's to solve? What do you do?

BOSTROM: Well, one form of this problem is if you have a superintelligent agent, how could you construct it in such a way that it would be safe and beneficial? Ideally, we would like to align its goals with human values. Right now, what we could do with a machine - we might be able to give it a goal, such as calculate as many digits in the decimal expansion of pi as possible, or some very simple goal like that. We could program that. What we couldn't do today is to give a computer the goal of maximizing beauty or justice or pleasure or love or compassion or any humanly meaningful value. So one research challenge is how could we find ways to transfer human values into machines, such as to create a goal system that would want the same kinds of things that we want?

SIMON: Mr. Bostrom, if a computer gets too cheeky, can't we just pull the plug out or, you know, smash its hard drive into smithereens?

BOSTROM: Well, today, certainly that's a possibility. Although, when we become very dependent on a system, like the Internet, it's not so clear any longer, like where is the off switch to the Internet? But when you have something that eventually becomes as smart as a human being, then it will be capable of strategizing and anticipating our actions and taking countermeasures. So our great advantage is, I think, that we get to make the first move. We get to create the AI. But it's important, particularly with superintelligence, that we would get it right on the first try.

SIMON: All my life I've heard about how subtle and intricate and powerful the human brain is - so much so that we haven't begun to understand it. I'd like to think that's still true. Can't we learn as fast as the machines?

BOSTROM: Well, the human brain is as complicated as it's ever been, and nobody knows today how to actually create machines with the same type of general learning ability that humans have. But there is no reason to think that biological information-processing systems like the human brain are anywhere near the limits of what is possible. The laws of physics, it looks like, would enable much more powerful forms of information processing. Much in the way that, say, a steam shovel or a tractor outperforms human muscles, I think human intelligence would also be surpassed.

SIMON: Mr. Bostrom, as I understand it, you have spent a lot of your working life dealing with nuclear war and asteroid strikes and now the prospect of superintelligent machines. How you doing?

(LAUGHTER)

BOSTROM: Well, I actually also spend some time thinking about the upside of things - like, if things go well, if humanity makes it through this century intact, I think the future for Earth-originating intelligent life could be very long and very large and very bright, indeed. We might reach technological maturity in this century, and with that would come the possibility to colonize the universe and to cure cancer and to accomplish many other things that we can only vaguely dream about today. It's worth being really careful and making sure that we don't screw up along the path.

SIMON: Nick Bostrom is the founder and director of the Future of Humanity Institute at Oxford University and the author of "Superintelligence." Thanks for speaking with us, Mr. Bostrom, and I don't mind saying good luck.

(LAUGHTER)

BOSTROM: Thanks.

Transcript provided by NPR, Copyright NPR.