Jackson, Mississippi's Source for News and Jazz

Expert talks about the Pentagon's use of artificial intelligence

STEVE INSKEEP, HOST: A new documentary features people who research and build artificial intelligence. A filmmaker questions them in "The AI Doc: Or How I Became An Apocaloptimist." Several experts struggle to answer the same question.

(SOUNDBITE OF DOCUMENTARY, "THE AI DOC: OR HOW I BECAME AN APOCALOPTIMIST")

DANIEL ROHER: Do you want to have kids one day? Is that something that you're into or not really?

DAN HENDRYCKS: I confess - I think that it's like, wait. Let's get through this critical period.

ROHER: Do you have any kids?

ELIEZER YUDKOWSKY: I do not.

ROHER: Is that something you want to do - have children? Have a family?

YUDKOWSKY: In some other world than this world, sure.

ROHER: Would you want to start a family?

TRISTAN HARRIS: I...

INSKEEP: That last person was Tristan Harris. He once worked for tech firms and now runs the Center for Humane Technology, which advocates for regulation. We invited Harris over to hear why he thinks market incentives are taking AI in a dangerous direction.

HARRIS: The only way they can justify the amount of money that's been invested into this AI industry is if they race to replace economic labor in the economy. That's the stated mission statement of OpenAI - is to be able to do everything that a human laborer can do.

INSKEEP: The OpenAI charter defines artificial general intelligence as highly autonomous systems that outperform humans at most economically valuable work.

HARRIS: So there you are. We're sitting in front of computer screens...

INSKEEP: Yeah.

HARRIS: ...In NPR. Anything you can do on that computer - answering emails, scheduling something, doing financial analysis, marketing, research - all of that you'll be able to do with AI. They're not racing to augment and support human workers. They're racing to replace human workers. And what that will lead to is unprecedented levels of concentration and wealth and power because essentially, all the money in the economy - instead of being paid to individual laborers, you're going to pay five to 10 AI companies to do all of the work.

INSKEEP: We should clarify - you did not produce this film. You...

HARRIS: Right.

INSKEEP: ...Influenced the filmmakers. You're in the film.

HARRIS: Yeah.

INSKEEP: And there's a moment a little later where you're still talking about kids. And you quote AI researchers or scientists saying they have kids and do not expect the kids to make it to high school.

HARRIS: Yeah. Well - so the problem is that AI carries so much risk. Let me give you a very concrete example. Just weeks ago, there was a new study at a university where the guy basically took all the leading AI models and put them in various war-game scenarios, and just seeing in this war game, what would you do to sort of beat this war?

INSKEEP: Is this, like, Claude versus ChatGPT?

HARRIS: That kind of thing.

INSKEEP: OK.

HARRIS: Yeah. Exactly. And they generated 780,000 words of reasoning instantly. And they're going back and forth in a turn-by-turn scenario. And in this scenario, the AI escalated to the use of nuclear weapons 95% of the time. What this says is that the AI - again, it's reasoning in a way that we don't understand, and we don't control what it's going to do.

INSKEEP: Now, the one possibility is AI destroys the world, as you just laid out with the war-game scenario. But you alluded to another one, which is a handful of companies - and in a sense, a handful of individuals - end up with all the money and the power.

HARRIS: Yes.

INSKEEP: Is that already happening?

HARRIS: In a way, I think it is already happening. There's something in economics called the resource curse - when a country gets more and more of its economic output from mining one resource. When that happens, the country becomes sort of addicted to just that resource being mined and doesn't want to invest in its people. It just wants to invest in that resource. With AI, there's something people worry about. There's an essay by Rudolf Laine and Luke Drago called "The Intelligence Curse." So what happens when in the United States, a few years from now, all of the economic output is coming from five AI companies that are automating all the science, automating all the physical labor, automating all the robotics and physical - etc.? Suddenly, why as a government would you invest in health care or child care or people? Because...

INSKEEP: Oh, because the people aren't valuable anymore.

HARRIS: 'Cause the people aren't valuable anymore.

INSKEEP: Hopefully, the people still vote, though. They would have some kind of power.

HARRIS: Well, that's an interesting question. Do you have to listen to the people's political power if you don't get your tax revenue from the people anymore, you get it from AI companies? And if companies - you can't use your bargaining power. You can't, like, withdraw your labor like a labor union because the companies don't need you either. So I'm not saying this to scare people. I'm saying this to clarify an incentive for governments to say, we should be investing in data centers and prioritizing electricity for data centers, not for people. It leads to things like Sam Altman saying just a few weeks ago - when he's asked - well, doesn't it take a lot of energy to train AI? - you know what his response was? Well, doesn't it take a lot of energy to grow a human over 20 years? This is leading to an antihuman future. It leads to the devaluing of humans and the valuing of AI. And this is something that's very dangerous. And the thing that people should take away is this is the last chance that our political voice will matter.

INSKEEP: As you know very well, the Pentagon recently had a conflict with Anthropic.

HARRIS: That's right.

INSKEEP: The Pentagon was using Claude, their AI platform, for war-fighting purposes. Anthropic was insisting on a couple of limitations - that this technology should not be used for mass surveillance of Americans and should not be used for the automated firing of weapons without a human being involved. Pentagon said, no, that's too much - we will not accept your limitations - and effectively fired them. What did you think about that as you watched it from the outside?

HARRIS: Well, there's a next part of that story, which is that after Anthropic said, we don't want to support mass domestic surveillance of Americans and we don't want to support autonomous weapons, you'll notice that Sam Altman jumped right in and said, we'll sell ChatGPT to the government to do both of those things. And what you saw subsequently to that was the largest drop in subscriptions to ChatGPT that has happened, I believe, and a huge surge of subscriptions into Claude and Anthropic.

INSKEEP: I guess we should clarify - ChatGPT had...

HARRIS: ChatGPT...

INSKEEP: ...At some point also said, we'll have restrictions. There was some question about that. And...

HARRIS: Yeah.

INSKEEP: The standard was, for the Pentagon, any lawful purpose.

HARRIS: Yeah. The point I'm trying to get at is that mass boycotts are actually very effective because these companies really need those user numbers going up to tell investors that the health of our company is really strong.

INSKEEP: Let me ask about this from the flip side, though. If you're not in favor of mass surveillance of Americans, if you're not in favor of AI-driven war, you might cheer Anthropic for having those rules and sticking up for its principles. But there's another way to look at it, which would be - the government that I elect should be deciding how this technology is used and not a particular company. I mean, letting the company decide might be the equivalent of when Elon Musk was deciding which Starlink satellites he wanted to activate in Ukraine.

HARRIS: No, definitely. There's huge governance issues here, and what this really is getting at is the nature of how technology is really the primary driving force of power in this world. And - how do you govern technology? - is the central question.

INSKEEP: So you feel the government should decide, and we have to push the government to do it competently.

HARRIS: We need to make sure that there are more experts and more citizen deliberations brought into that process, in addition to the government ultimately making the decision.

INSKEEP: Tristan Harris, it's a pleasure talking with you. Thank you.

HARRIS: Thank you so much, Steve.

INSKEEP: Tristan Harris is an advocate for AI regulation who appears in a new documentary called "The AI Doc."

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Steve Inskeep is a host of NPR's Morning Edition, as well as NPR's morning news podcast Up First.