
Plain English With Derek Thompson

How Superintelligent AI Could Upend Work and Politics

About the episode

Many AI experts believe that sometime in the next few years, we will build something close to artificial general intelligence (AGI), a system that can do nearly all valuable cognitive work as well as or better than humans. What happens to jobs, wages, prices, and politics in that world?

To explore that question, Derek is joined by Anton Korinek, an economist at the University of Virginia and one of the leading thinkers on the economics of transformative AI. Before he focused on superintelligence, Anton studied financial crises and speculative booms, so he brings a rare mix of macroeconomic skepticism and technological optimism. They talk about quiet AGI versus loud AGI, Baumol’s cost disease, robots, mass unemployment, and what kinds of policies might prevent an “AGI Great Depression” and ensure that no American is left behind.

If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com.

In the following excerpt, Anton Korinek shares with Derek the reasons he is optimistic about the potential of AI despite the high level of speculation in the industry right now. 

Derek Thompson: So in the last few weeks, we’ve done several shows on AI and what happens if it goes wrong and turns out to be a big bubble. This episode begins with the opposite premise. What if AI goes right? If we take seriously the predictions of these frontier labs that we are within a few years or a decade of building AGI, artificial general intelligence, what happens to the economy? So before you were one of the go-to experts on the economics of AI, you were a financial economist studying financial crises. So I actually wanted to begin with this: How much does this AI infrastructure build-out remind you of things like the dot-com boom and the housing boom?

Anton Korinek: It feels very much like that. There’s a lot of speculative frenzy right now. It feels like if you are an entrepreneur and you say, “I do AI,” you can easily raise double-digit millions. And in the past, I would’ve said, “This has all the hallmarks of a speculative frenzy and a bubble.” But at the same time, I believe that even though there may be some short-term frenzy going on, in the medium term, AI is going to be much more impactful and much more powerful than any previous invention.

Thompson: Why?

Korinek: If the bets that the leading AI labs are making right now come true and we get artificial general intelligence, it would utterly transform our world. You asked me to define AGI earlier. The charter of OpenAI, for example, defines it as something like machines that can perform virtually all valuable economic work. Or take Dario Amodei, who described what he called “powerful AI” in an essay last year, saying it would be like a country of geniuses in a data center.

Thompson: But why should we believe them, right? You’re talking about OpenAI; you’re talking about Dario Amodei, the CEO and founder of Anthropic. These are businessmen. They’re raising money. They’re spending billions of dollars more than they’re actually bringing in. They need—existentially, to stay alive as companies, they have to persuade their investors that they are working on something that is absolutely ginormous in its implications in order to justify the amount of capital that’s going into building these machines. So of course they’re going to say, “Oh, it’s a nation of geniuses in a data center. This is going to change the world.” Why do you believe them? Why do you think they’re right that we might have something like artificial general intelligence by the end of the decade?

Korinek: Right now, it’s still a bet. And I think if you catch them in private, they are probably going to say as well that this is a bet. There is no certainty about this at all, but we have a number of indicators that suggest that we are on a curve. We are scaling these things, and there are predictable relationships that tell us that if we put more and more computational power into these systems, they’re going to get better. So that’s the first indicator: it’s just extrapolating a curve. And we know there are some risks with extrapolating things. Sometimes relationships suddenly break down, and the extrapolation won’t work anymore.

Then the second indicator is just my personal, lived experience. Part of what I do in my research is follow the capabilities of AI systems very closely, and I write regular reports about how you can best use AI systems in science. And there, I’ll say, I have continually been blown away every time I wrote another piece on this topic over the past couple of years, because advancements have been so quick.

And then the third point that I think a lot of people in this space are making is that we are talking about neural networks, and there is a proof of concept in nature: a sufficiently advanced and properly wired neural network, which is what we all have in our skulls. Our brain can be generally intelligent. At some level, all the artificial neural networks we are designing nowadays are inspired by biological neural networks. There are some differences, and we run them differently, obviously, but the fundamental power of neural networks in biological brains and in silico is the same. So, in some sense, the bet that the frontier AI companies are pursuing is this: we see that there are biological neural networks that are generally intelligent, and we are betting we can reproduce something like that, even if it looks a little bit different in silico.

This excerpt has been edited and condensed.

Host: Derek Thompson
Guest: Anton Korinek
Producer: Devon Baroldi