
Plain English With Derek Thompson

Anthropic Thinks AI Might Destroy the Economy. It’s Building It Anyway.


About the episode

Today’s podcast is an interview with Jack Clark, one of the cofounders of the AI company Anthropic. One thing I’m trying to do with the subject of artificial intelligence is offer a balance of perspectives on an issue that tends to receive mostly one-sided coverage. Some people are certain that AI is a bubble; some are certain it is not. Some are certain that AI will destroy millions of jobs; some are certain that it will not. I want listeners of this show to feel that every time they hear an intelligent take on one side of this issue, the next episode will bring a countervailing take. Two weeks ago, you heard the investor and writer Paul Kedrosky argue that AI was an economic bubble.

But if any single data point pierces that narrative, it’s this. From December 2025 to this month, March 2026, Anthropic has more than doubled its annual recurring revenue, from $9 billion to nearly $20 billion. According to several analysts, there is no record of any company growing this fast at this scale.

Now, I don’t need Jack Clark or anybody at Anthropic to read me a corporate statement about the company’s revenue growth. I can read that myself. What I wanted to do today is ask questions that only someone in Jack’s position can answer.

If Anthropic’s executives believe that AI might be as dangerous as nuclear weapons, what right does any private business have to build this sort of thing for profit?

How does the company balance its reputation as the industry leader in caution and safety with its other reputation as one of the fastest developers of this technology?

And if artificial intelligence has the capacity to produce a country of geniuses in a data center—as Anthropic’s CEO insists—why do Americans say they disapprove of artificial intelligence more than just about any other institution or individual in the world?


If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com.

 

In the following excerpt, Derek talks to Jack Clark about AI’s potential and its governance.

Derek Thompson: So we are meeting each other in a shared space of mutual exhaustion, which is always nice.

Jack Clark: Sure.

Thompson: Hopefully that leads to some kind of symbiosis. I was thinking about holding this question for the end, but it might be the most important question I ask, so I might as well just get it out in front. You’re building a technology that you think is going to change the world and change the nature of work more than anything since the computer, maybe electricity, maybe anything else. If you’re right, our kids’ futures are going to be profoundly reshaped, maybe ruined, by this technology, and I wonder how that sits with you. You go to work and you work on Claude, and then you go home and you raise your children. When you bridge those two lives, how do you think about the art of raising kids in a world where there’s a technology coming down the pike that will be smarter than us at almost everything, which is at least the goal of your company? How do you sit with that, and how do you think about raising your kids?

Clark: Yeah, I spend a lot of time thinking about this. As you know, when you become a parent, all the clichés turn out to be true: it’s not about external validation, it’s really about having a good sense of your own self, and various pat phrases like that. But when I look at my kids and think about my own experience of this technology, being curious about the world, being interested in it, and getting joy from experiencing and learning about it are how I stay calm and stay ready for the technological evolution that’s happening all around us.

When I look at my children, the main thing I’m doing is encouraging them to develop passions like reading, playing, and exploring the world, because whatever happens with the technology, getting through any period of change requires some sense of yourself that isn’t massively contingent on a changing environment outside, along with some innate curiosity and a world you can live in inside your own head. I think that stems from encouraging curiosity and encouraging them to get to know themselves.

Jack Clark during the Hill and Valley forum at the U.S. Capitol in Washington, D.C., on April 30, 2025

Getty Images

Thompson: You said “curiosity” several times, and I think I agree that that’s a value that artificial intelligence might amplify. What does curiosity mean to you?

Clark: For the first time, we have a technology that lets you follow your curiosity to its absolute limit. I’m reminded of when I was a kid. I’m sure you were the same. I would go on interesting research expeditions: I’d research ant colonies, or black holes, or how city planning worked, and I would follow that interest to extraordinary points. I’d learn aspects of time dilation around black holes, or I’d learn how to implement ant colony simulations on my computer. I’d indulge my curiosity, and it was incredibly fun. And now we have a technology that lets anyone take something they’re curious about and push it to the absolute limit.

And I think that this is wildly exciting and also good for you. Whatever happens to labor and employment (and big changes are surely coming), being able to exercise your own curiosity and derive satisfaction from it is, I think, really important. When I was a kid, I didn’t have any ambition to be the world’s best physicist or the world’s best town planner. I just found this stuff fun to think about and enjoyable. And I think the more we encourage people to get good at that, the better set up we’ll be for what this technology will bring us.

Thompson: We’re going to return to some of those themes in a second when we talk about AI and the labor force, but I want to get to the news. As most listeners know at this point, Anthropic was in a spat with the Pentagon over contract details that ended with the company being designated a supply chain risk. I know that you are extremely limited in what you can say about the details of the case because your company is in active litigation against the Department of Defense, War, whatever. I hope this question, therefore, arrives at the right level of altitude for you to be able to answer it.

Anthropic has compared artificial intelligence to nuclear weapons on several occasions. This is not a rare analogy. Most recently, in January, Dario Amodei, the CEO of Anthropic, said the Trump administration’s decision to allow advanced Nvidia chips to be exported to China was “a bit like, I don’t know, like selling nuclear weapons to North Korea and bragging, ‘Oh, yeah, Boeing made the casings.’” The U.S. does not allow private companies to build nuclear weapons. That is the law. If artificial intelligence is just like nuclear weapons, why should we allow private firms to build it for profit?

CEO of Anthropic Dario Amodei, Yoshua Bengio, and Stuart Russell are sworn in for a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing in Washington, D.C., on July 25, 2023

Getty Images

Clark: AI is fundamentally like everything. It’s like a factory that produces cars, micro-scooters, animals, and nuclear weapons all at the same time. And the main question that we’re going to have to deal with as a society is how do you govern those factories that produce these things, and how do you decide what the appropriate uses are of the things that come out and where they should be used? So I can’t talk, obviously, about the specifics of our ongoing discussion with the Department of War. I can say that Anthropic was extremely committed to working on national security early, because we recognize that AI is going to touch every single part of life, and every single part of life is going to have its own range of incredibly thorny, difficult issues.

So ultimately, we’re going to need a much larger societal conversation about how we govern this technology in general, and we will need to reckon with the fact that the technology comes from the private sector and then flows into all of these other sectors, and that’s going to be really challenging. It’s something we haven’t encountered before, because previously you didn’t have a technology with this ability to become anything. You had specific technologies built by specific industries for specific purposes, and that was in many ways simpler.

This excerpt has been edited and condensed.

Host: Derek Thompson
Guest: Jack Clark
Producer: Devon Baroldi
