About the episode
Last week, the Bureau of Economic Analysis published the latest GDP report. It contained a startling detail: spending on artificial intelligence contributed more to U.S. economic growth last quarter than consumer spending did.
This is very quickly becoming an AI economy.
I’m interested in how AI will change our jobs. But I’m just as curious about how it will change our minds. We’re already seeing that students in high school and college are using AI to write most of their essays. What do we lose in a world where students sacrifice the ability to do deep writing?
Today’s guest is Cal Newport, the author of several bestsellers on the way we work, including Deep Work. He is also a professor of computer science at Georgetown.
One of the questions I get the most by email, in talks, in conversations with people about the news is: If these tools can read faster than us, synthesize better than us, remember better than us, and write faster than us, what’s our place in the loop? What skills should we value in the age of AI? Or, more pointedly: What should we teach our children in the age of AI? How do we ride this train without getting run over by it?
If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com.
Host: Derek Thompson
Guest: Calvin Newport
Producer: Devon Baroldi
Summary
In the following excerpt, Derek talks to Cal Newport about how college students are using AI.
Derek Thompson: I want to start with observations before we move into analysis. In your reporting, in your experience, have you seen major changes in the behavior of students in the age of AI?
Cal Newport: It’s shifting rapidly. In my own case, for example, last year I did a story for The New Yorker about how students are using AI in the context of writing. The premise of the story was that I looked over the shoulder, virtually speaking, of a couple of students to actually watch how they were interacting with AI. And what I found in that story, the thesis of that story, is that a lot of what students were doing when they were writing with AI was not outsourcing the writing to the AI. They weren’t saving time, either, so it wasn’t an efficiency play. My argument was that they were having these almost parasocial back-and-forth interactions aimed at reducing the peak cognitive strain of writing a paper. So it was this interesting interactive back-and-forth they would have: “Well, what about this? What do you think about that? OK, I’m going to write this. What do you think about it?”
My thesis was that it was about reducing the peak difficulty of writing. Fast-forward a year, and multiple pieces have reported on this more recently, and it seems like this parasocial relationship, this “I’m in this interaction with you to take away the difficulty, to smooth over the hard peaks,” has really metastasized. There’s a big new article out recently from The Guardian where a reporter convinced, I think it was three students, to basically let him have full access to their ChatGPT Plus accounts so that he could see every single thing they were saying over a period of, whatever it was, a couple of weeks. There’s a huge transcript. And he says their interaction with it was constant. It was just all day long. Not just about homework assignments, not just about a paper they were writing. It’s “What does it mean to be a human being?” “What do you think it means that this girl said this to me when we walked by each other in the cafeteria?” It had become a sort of interactive, back-and-forth partner.
So there’s this interesting thing happening among young people with these tools, where their use is expanding beyond the pragmatic role of “this is helping me do certain work,” which we can get into, because it is. And I think it’s making a big difference in how college actually operates. But it also seems to be moving immediately into the space where social media and other tools already were: this diversion, distraction, tickle-your-brain type of context. It’s interesting. It gives you a release in the moment. It prevents you from being bored. It prevents you from having to feel some negative strain. So it’s expanding its footprint in students’ lives more than I thought it would. A lot of things are going on here.
Thompson: One thing I hear you saying is that while a lot of people think about AI as an economic technology, you are watching the way it’s already become a social technology. You talked about the parasocial relationship people have with the large language models they’re interacting with. They’re talking to it like it’s a professor, like it’s a friend, like it’s a research assistant. That’s how I use, say, Deep Research. Let’s ground this at the level of college. How has this changed the way students and professors operate?
Newport: If we look at it just from a functional perspective, it’s changed a lot of things. I teach mainly math-type or theory-type courses. During the pandemic, obviously, you had to administer exams remotely because no one was there in person at all. There was a brief window after the pandemic where the thought was, “Oh, this is convenient. Why not keep doing it this way?” Right? During exam period, students don’t all have to stick around for five days; they can go home and take the exams. Let’s keep doing this online. I was doing it synchronously online, essentially.
Post-ChatGPT, you can’t do that, not for an introductory discrete mathematics course, because every single problem on that exam can probably be answered for you by ChatGPT. It’s not only that the math isn’t very complicated. As I learned when I experimented with using ChatGPT to help write problem set problems, for a lot of these problems it turns out there are only so many good examples out there. There are only so many actual ways to test a sophomore-level undergraduate on doing strong induction proofs or something like that. There are really four good examples, and you can obfuscate them, but that’s that. So you could put almost every problem into ChatGPT, right? So, OK, we can’t do remote exams anymore.
I think the same thing is happening with lower-stakes writing; I picked this up in my reporting. There used to be a big thing where you would say, “Come on, turn in a response essay at the beginning of each class.” The idea is, I’m not going to look at it too carefully, but you have to do the reading because you have to write 500 words of your thoughts or whatever. As one of the students told me in my reporting, “Oh, those types of essays are eminently ChatGPT-able.” That was the phrase, because exactly that type of writing is very ChatGPT-able.
When it comes to large papers, what seems to be happening is this: When the technology first came out, we professors thought that maybe students would be able to essentially recursively generate the entire paper from scratch. You can’t ask ChatGPT, “Give me 10,000 words on, you know, Jung and the collective unconscious.” But what you can do is say, “Give me an outline for an essay like that.” “OK, let’s look at Section 2 of this outline. Break that down into three subsections.” “OK, let’s look at this subsection here. Can you give me some draft text?” So there was some fear that, by doing that, you could basically have a whole essay produced. It would be like a custom version of those cheating websites that were around when we were growing up, where you could pay for papers on the internet. That’s not working too well. Those papers aren’t coherent, and the tone isn’t that great.
So on long papers, it’s more that students are using it for ideas. They’re using it for “give me a structure for this argument.” They’re outsourcing less of the craft of producing the actual words and more of the critical thinking about what they want to say. That is creating changes, I think, for the humanities. There’s been a move toward a lot more in-class assessment. In a quantitative class, the move is toward something like a quiz in the first part of class once a month, weighted more heavily than the problem sets. For intro mathematics classes, there’s more of a move toward treating problem sets as practice: you should do these problems, because that’s how you’re going to practice, but they’re going to count less as graded assessment. The temptation is too strong when you could basically solve any one of these problems with ChatGPT.
So there is definitely a big shift that we have to deal with. I think the consumer internet caused a comparably sized shift in higher academia. I was a student and a grad student just as it got really big, and it introduced a lot of changes we had to adapt to. What’s happening now is similar, but the changes are numerous as well.
This excerpt has been edited and condensed.