We’re coming down to the wire, and Democrats’ hopes of holding onto the Senate and the House are fading fast. Two months ago, the story was that Democrats seemed poised to pull off an upset and hold onto the Senate, even though the party in power almost always loses seats in the midterm election. But now, the Senate looks like a toss-up. It’s not just Democrats who are facing challenges this year—pollsters are too. Error margins are rising as fewer people respond to survey calls. That means we’re flying a bit blind: Political campaigns, commentators, and voters can’t be sure that the polling averages they’re seeing in the news are an accurate reflection of reality. Today’s guest is Kristen Soltis Anderson, a Republican pollster and the cofounder of Echelon Insights. We discuss the closest races in Georgia and Pennsylvania, whether Donald Trump is an overall help or hindrance to the GOP, and why the golden age of polling is over.
In the following excerpt, Derek Thompson and Kristen Soltis Anderson discuss why polling has become less trustworthy, and why the way we use polls has become less effective.
Derek Thompson: I do want to talk to you about the midterms and the Republican surge in just a minute. But first, I really want to talk to you about the quality of polling.
In 2016, obviously, polling was famously off. In 2020, pollsters said they fixed the problem and polls were off again. There was a New York Times article that came out today that interviewed a bunch of pollsters, and some of the quotes got me a little bit freaked out.
Ann Selzer, who is a prominent Iowa pollster, said this to The New York Times: “There isn’t a pollster who is telling the truth, who doesn’t worry all the time about [falling response rates]. Do I feel like there is a doomsday clock ticking? Yeah, I kind of do.”
Kristen, what is she talking about? And how worried are you about the quality of polling right now?
Kristen Soltis Anderson: I’m very worried. And I say this as someone who has been working in this field for a decade and a half, and someone who takes pride in her work and feels confident in the stuff that I’m doing at my firm. But I would have to say that I think Ann Selzer’s take on this as a doomsday clock, maybe I wouldn’t use exactly the same metaphor—the way I would describe it, it’s sort of like confronting a pandemic: You know that there’s a problem, and you’re trying to figure out how to treat it, and you’ve got to develop experimental medications to treat it. And right now the polling world is in the “We are developing an experimental cure. We’re not sure if it’s going to work, and we don’t totally know what the side effects are” type mode.
And so, whenever people are asking me about whether they can trust the polls, I say, “Look, in some ways it’s a miracle that the polls are as good as they have been considering how few people take polls, how fast the technology is changing,” and so on and so forth. But this is a year when, unlike previous years when the polls have been wrong and pollsters went, “Aha! That’s what was wrong, and here’s how you fix it,” there’s still a big question mark lingering out there after 2020. And so, everybody is kind of throwing stuff at the wall to see what sticks, to see who solves the problem for 2022. And then, even if you solve it for 2022, there’s no guarantee that that means you’ve got the right answer for 2024 and beyond.
Thompson: So, if we want a sophisticated understanding of the shape of this problem, what’s going on? Why has polling seemed to become so much less trustworthy over the last few cycles?
Soltis Anderson: Well, in some ways, I think it’s a combination of the polls themselves becoming a little less trustworthy, and the way that we use polling becoming a little less effective.
So, on the one hand, there’s some great analysis done by the big association of pollsters. It’s called AAPOR. They put out a big report after the 2016 election that looked historically at how accurate polls have been. And they found that, actually, for most of the 20th century, polling wasn’t great. It tended to be off, on average, by a couple of points here or there in most elections. You had a couple that were pretty good: 1984, 1988. The polls in 2000 said it was going to be a pretty close election, and it was.
Nowadays, though, when polls are off by 2 or 3 percentage points, that causes a lot of alarm. If a race was supposed to be close and then somebody wins by 2 or 3 points, people say, “Oh, well, the poll said that was going to be close. Look, it wasn’t that close.” We now use polling so heavily in punditry that even little shifts wind up getting blown out of proportion in the coverage. There’s so much more attention paid to it, so many more people following it, that you could have put out a poll that was kind of wrong back in 1982, and it wouldn’t have dominated the news cycle or changed the way reporters were covering the race, I think, in the way that it does now.
Thompson: It’s amazing. You sent me this report just before we pressed the record button. I was really interested in knowing: What was the golden age of polling? When was it that polls were supposedly just so wonderful? And you go back to the 1930s, 1940s, when national polling really starts to take off. Polling was awful. The 1936 election was off by 12 points. The 1948 election was off by almost 10 points. I mean, polling in the middle of the 20th century was a disaster. It looks like by the time you get to the late 1980s, mid-1990s, that’s what we might call the golden age of polling.
So I feel like one way to help us understand what went wrong is to juxtapose now versus the 1990s, when the average error in vote margin was really, really small. What are the most important differences?
Soltis Anderson: One of the big differences is that back in that golden age, everyone was reachable in the same sort of fashion. Not everyone had a landline phone, of course; you’ve always had some form of bias. But generally people had landline phones in their home, which, for a variety of regulatory reasons, are not too hard to call. You could call people during dinnertime at home, and about three or four out of 10 people you called would pick up the phone and take your survey. So it was this uniformity of how you could reach people, paired with this willingness to talk to pollsters, that we don’t have now. Nowadays, the percentage of people that have landlines in their home is extremely small. People now tend to have cellphones. But I don’t know about you: I’m a pollster, and I don’t pick up calls on my cellphone from numbers I don’t know.
So caller ID, the rise of cellphones—there are regulatory reasons why it is very hard or very expensive for pollsters to call cellphones. People are less likely to pick up, and you aren’t able to contact everyone by the same method. If your poll just calls people on the phone, you’re missing people who don’t have a reliable phone, or certainly who don’t have a landline. But then, if you do a poll that’s just online, you’re systematically missing anyone who maybe doesn’t have broadband, or isn’t really comfortable using the internet that much. You wind up with different biases for different methods. And that’s just not the world you had when pretty much everyone was reachable by a landline phone back in the ’80s and ’90s.
This transcript was edited for length and clarity. Listen to the rest of the episode here and follow the Plain English feed on Spotify.
Host: Derek Thompson
Guest: Kristen Soltis Anderson
Producer: Devon Manze