How to Read a Poll
CNN polling editor Ariel Edwards-Levy walks you through everything you need to know.
Polling is a staple of political journalism, but it’s also hard to get right.
As a journalist with the now-defunct HuffPost Pollster who later ran a survey partnership between HuffPost and YouGov, CNN polling editor Ariel Edwards-Levy has seen a lot of polling stories.
She talked with us about what you need to know when writing a polling story.
Suppose you've just been assigned to do a write-up of a horse-race poll. What do you do first?
I treat polling data the way I’d treat any other information I receive as a reporter. I want to know who the source is, how reliable they are and why they’re providing the information. I want to have a good sense of what the useful takeaway is from the poll, and how I can share that with my audience in a way that tells them more about the political landscape without overstating what we can learn from any single number.
Generally speaking, the first thing I’ll do is to look at the methodology statement for the survey – which should explain in detail how they conducted the survey — and the toplines — which should recount exactly what was asked and what percentage of people polled chose each response. If those aren’t available, I’m not writing about the survey until I get ahold of them.
I also want to look at how this survey fits in the context of any other data available, rather than treating each new datapoint as though it exists in its own universe. I want to know if the new data looks broadly like other polling out there, or if it’s telling a different story. Something that’s a bit of an outlier isn’t necessarily wrong, but that comparison is worth noting. Similarly, if this is the first bit of data we have, that’s worth noting too.
How do you determine if a pollster is reputable?
The most basic principle underlying just about any worthwhile poll is that it’s making an effort to be representative — to ensure that the relatively small number of people who answered the questions are reflective of whatever much larger population the poll is trying to measure the views of. Broadly speaking, pollsters’ efforts to do that fall into two buckets: their sampling (how they’re selecting people to survey) and their weighting (what adjustments they’re using to make sure their sample’s demographics look like the full population's demographics).
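To make the weighting idea a little more concrete, here is a minimal sketch of one simple approach (post-stratification on a single variable, with made-up numbers). Real pollsters typically adjust for many characteristics at once, often with more elaborate techniques such as raking, so treat this as an illustration rather than how any particular pollster works.

```python
# Minimal post-stratification sketch: weight a sample so one demographic
# (age group) matches its known share of the population.
# All shares below are illustrative, not from any real poll.

sample_shares = {"18-29": 0.10, "30-49": 0.30, "50-64": 0.35, "65+": 0.25}
population_shares = {"18-29": 0.20, "30-49": 0.33, "50-64": 0.25, "65+": 0.22}

# Every respondent in a group gets the same weight: population share / sample share.
weights = {group: population_shares[group] / sample_shares[group]
           for group in sample_shares}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Under-sampled 18-29 respondents count roughly double; over-sampled
# 50-64 respondents count for less, so the weighted sample mirrors the population.
```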
Over the past decade or so, the ways pollsters have gone about trying to do this have become increasingly complicated and varied – and there’s no easy answer as to which approach is the “gold standard.” I think there’s still this image many people have of pollsters just randomly selecting phone numbers and calling up folks on their landline phones. But effectively no major or reputable pollster is solely calling landlines these days. Instead, there’s an increasing push to reach people where they are, using everything from cell phone calls to text messages to postcards in the mail to panels of survey respondents — or combinations of all these approaches.
There used to be a lot of skepticism about surveys that were conducted online, which are often cheaper than phone surveys, and have a lower barrier to entry. These days, however, that's not in itself a red flag — many of the most rigorously conducted surveys out there are predominantly fielded online. That said, there are a lot of ways to do online polls, and a methodology statement that doesn't tell you anything about a poll beyond the fact that it's an online survey is wildly inadequate.
As you can imagine, all of this makes it hard to offer any hard-and-fast rules. But the one I think is more important than ever is this: pollsters need to be transparent. They should be willing to describe fully and in detail the choices that they're making. CNN, for instance, has a list of questions we look for pollsters to answer about their surveys — some of them are more technical than others, but the full list is here.
Beyond methods, I’m also looking at the interests of the pollster and the poll’s sponsors. Reporters should exercise real caution when looking at polls conducted by groups who have a stake in the results: campaigns who want to show that their candidate is ahead, advocacy groups trying to prove that their pet policy matters a lot to voters, marketers trying to show the need for a product like theirs, and so on.
I also want to drop a note here about sample size: it’s important, but its importance can be overstated. A sample size in the range of 1,000 respondents is fairly standard for a national poll, and a poll with a much larger sample size isn’t inherently going to be much better or more representative of the population it’s trying to measure. The main advantage of a very large sample size is that it can provide more precise estimates when looking at smaller subgroups – like adults younger than 30, or Black voters – whose results would otherwise have high margins of error.
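To put rough numbers on that, here is the textbook 95% margin-of-error calculation. This assumes simple random sampling; real surveys also carry a "design effect" from weighting that makes the true margin somewhat larger, so this is only a ballpark.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Textbook 95% margin of error, in percentage points, for a simple
    random sample of size n (worst case, p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(f"n = 1,000 (typical national poll): ±{margin_of_error(1000):.1f} points")
print(f"n = 4,000 (much larger poll):      ±{margin_of_error(4000):.1f} points")
print(f"n = 100 (a small subgroup):        ±{margin_of_error(100):.1f} points")
```

Quadrupling the sample only cuts the margin for the full sample roughly in half (about ±3.1 to ±1.5 points), but it makes a big difference for the precision of small-subgroup estimates.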
How do you decide if this poll is worth writing about?
Fundamentally, I view polling data as a way to gauge ordinary Americans’ views and experiences without having to rely on anecdotes or second-hand punditry, and to give their collective voices a place in the news.
So-called “horserace” polls tend to get a lot of attention, particularly in election years, but I think the most meaningful and interesting findings from surveys are frequently elsewhere. Polls can tell us how political coalitions are being reshaped, which messages and issues are breaking through and how values are shifting over time – as well as plenty of topics that go beyond politics, from attitudes about alcohol to changing religious beliefs to how people have been affected by climate change or how society engages with pop culture. I wrote a bit more about this looking back at the polling last year.
How do you decide on the lede of your poll story?
With any polling lede, I’m trying to strike a balance between highlighting what’s new or different in a survey, and maintaining a focus on the bigger picture of public opinion, even if that picture is stable or unsurprising.
I also want to make sure I'm staying within the limits of what the data I have can actually tell me. If I'm looking at a poll where Candidate A is polling at 42% and Candidate B is polling at 41%, I'm never going to write about that as though it shows that Candidate A is "leading" or "ahead" – I'll say that it suggests a close race in which neither holds a clear advantage. (Pew Research Center, which is a fantastic resource for understanding how polling works, has a useful explainer of margin of error here.) Similarly, if I'm looking at a poll that says the share of Americans who said XYZ was 48% a month ago and is 46% now, I'd probably describe that as “largely stable."
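As a back-of-the-envelope illustration of why a one-point gap isn't a "lead" (the sample size here is hypothetical, and the two-times rule is a common rule of thumb rather than a precise statistical test):

```python
import math

n = 1000                                   # hypothetical sample size
moe = 1.96 * math.sqrt(0.25 / n) * 100     # roughly ±3.1 points on each share
gap = 42 - 41                              # Candidate A minus Candidate B

# Rule of thumb: a gap between two candidates in the same poll is only
# clearly meaningful if it exceeds about twice the margin of error.
print(f"Gap: {gap} point(s); margin of error: ±{moe:.1f} points")
print("Too close to call" if gap < 2 * moe else "Clear advantage")
```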
What are some things you should always include in your write-up?
When I’m writing about a survey, I always want to include at a minimum: the name of the pollster, the identity of any sponsors, the dates during which it was fielded, the sample size, the margin of error, and some basic information about how it was conducted.
I also want to be consistent when describing which group the poll is intended to represent: “all US adults” is a different group of people than “registered voters,” or “voters who are likely to turn out in the next election.” A great recent example of why this last one matters: people are more likely to tune in for a State of the Union or other presidential address if they’re already supporters of the president. That means “speech watchers” tend to be a much friendlier group than the full American public – which is extremely important context for why such speeches tend to get such positive ratings.
And finally, a link back to the data you’re writing about is always a good thing!
Any other tips for writing about polls?
There are some bits of terminology that can trip up people who haven't written much about polls, so I'll run through a few of those:
Poll and survey are effectively synonyms. Aggregates average together the results of multiple polls. Forecast models use that aggregate data to try to predict how likely it is that one candidate or another will win an election. This is an important distinction — polls themselves are not predictions about who will win an election. There's also a big difference between a poll that finds 58% of voters supporting a candidate and a forecast model that gives a candidate a 58% chance of winning (which is only a little better than a coin toss).
A change of X percentage points is different from a change of X percent. If support for a policy drops from 55% to 40%, it is down by 15 points, not by 15 percent (in relative terms, that’s a drop of roughly 27 percent).
A politician’s approval rating measures how well respondents think they’re carrying out their job. Their favorability rating measures how respondents feel about them personally. These two numbers are often similar, but not necessarily the same — think of someone who’s highly competent at their work but a total pain, or vice versa.
An oversample is a legitimate, useful polling tool that entails conducting additional interviews with members of a small subgroup to get clearer data on what they think. What an oversample doesn't mean is that the poll is biased in favor of that group — they aren't given any more weight in the overall numbers than they would otherwise have (there’s a quick illustration of how that works below).
A push poll is not really a poll at all — it's a form of attack ad using the format of a survey in order to disseminate negative information, rather than an attempt to collect information. Polls that ask leading questions may be message testing — trying to see which arguments resonate most — or just biased or badly done, but they're not push polls.
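Here is the oversample point in miniature, with made-up numbers: if a subgroup makes up 12% of the population but a pollster deliberately interviews enough of its members to fill 30% of the sample, weighting brings the group back down to its real-world share in the topline results.

```python
# Illustrative only: a subgroup is oversampled for extra precision, then
# weighted back to its true population share so it doesn't skew the topline.
population_share = 0.12   # the subgroup's actual share of the population
sample_share = 0.30       # its share of interviews after oversampling

subgroup_weight = population_share / sample_share                    # 0.40
everyone_else_weight = (1 - population_share) / (1 - sample_share)   # ~1.26

weighted_subgroup_share = sample_share * subgroup_weight
print(f"Weighted share of the subgroup: {weighted_subgroup_share:.0%}")
# Prints 12% – the extra interviews sharpen the subgroup's numbers without
# giving it any extra influence over the overall results.
```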
Finally, I can’t emphasize this enough: all polls are inherently estimates, not precision instruments — and they’ll be most useful to you and to your audience if you treat them that way.