Friday, March 5, 2010

Counting heads

This is a year for midterm elections, some primaries having already been conducted, so you can be confident of being battered with polling results from now till November. Like Satchel the dog playing “food, not food” in Get Fuzzy, you’ll want to be careful about what you taste.

Advice for reporters:

That 140-character tweet isn’t going to allow for much nuance, so plan on being more thorough in the full story. Keep in mind that your readers do not have the time, and often not the expertise, to evaluate opinion polls, so you are responsible for reporting them accurately. Ask the necessary questions.

1. Who sponsored the poll? If it is a genuinely nonpartisan organization, fine. But if it is a business or labor union or party/advocacy organization, you need to be cautious about the results, and so does the reader. Unless a turncoat slips you a copy, no campaign organization is going to reveal that its candidate is less popular than registered sex offenders.

2. Who conducted the poll? Was it an organization known to be reputable, with a history of reliable results? Or not?

3. How big was the sample, and who was in it? Too small a sample, or too narrow a choice of groups within the population, and the results will be highly questionable. Make sure that the respondents were randomly selected rather than a self-selected population like the people who participate in those worthless online or call-in surveys. 

4. How were the questions worded? Changes in the wording of questions can produce opposite responses from the same people. Loaded language will skew results. 

5. What’s the margin of error? The confidence level? Responsible polls report both. If Candidate A has 42 percent and Candidate B has 40 percent and the margin of error is plus or minus 3 percentage points, Candidate A might in fact be leading, but you can’t say that for sure; the race is a statistical tie. And the margin of error for subgroups within the sample will almost certainly be larger, often much larger, than the margin for the overall sample, because the subsamples are smaller.

6. When was it taken? Attitudes can fluctuate widely during a campaign. A poll more than a few days old may represent views that have since shifted. And, generally, the more distant from Election Day, the less reliable the data will be in predicting the outcome.

7. Why aren’t you asking these questions? There is nothing novel about them. Multiple sources tell you how to deal with polls; much of the information in this post, for example, can also be found in the Associated Press Stylebook. So why are Associated Press articles, and journalism in general, so careless about repeating just about anything any pollster says?
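The arithmetic behind the margin of error in point 5 is simple enough to check for yourself. Here is a rough sketch in Python, assuming a simple random sample and the conventional 95 percent confidence level; the poll of 1,000 respondents and the subgroup of 250 are hypothetical numbers chosen for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p, from a simple
    random sample of size n, at roughly 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents with Candidate A at 42 percent:
print(round(100 * margin_of_error(0.42, 1000), 1))  # prints 3.1 (points)

# The same question asked of a 250-person subgroup of that sample:
print(round(100 * margin_of_error(0.42, 250), 1))   # prints 6.1 (points)
```

Note that cutting the sample to a quarter of its size doubles the margin of error, which is why results broken out by subgroup deserve extra suspicion.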

Advice for readers:

You may not have the background to evaluate polling data, but you know enough to evaluate the articles about the polls. Be skeptical. If the article describing poll results doesn’t give you indications that the writer has done the homework described above, then you have no reason to trust the claims being made. And if the article makes exaggerated claims for the significance of the poll, you’d be well advised to be even more suspicious.

Written sources — newspapers, magazines, online publications — obviously have more scope to do the necessary level of reporting than broadcast television, though cable news operations will often describe polling data in some detail.

In the 2008 election season you could find people publishing averages of polls: different surveys conducted by different organizations at different times for different populations with different questions, under the highly questionable assumption that mashing inconsistent data into a single lump provides a nugget of reliable information.

The word poll, originally meaning head, is very old; the OED records a citation from the late thirteenth century. So an opinion poll is a counting of heads. Just make sure that you don’t allow your noggin to be stuffed with dubious information. 


  1. If I were in the news business and wanted to establish my brand (one which enhances the value of my product), I would publish and maintain a list of the minimum information that must be supplied by a polling institution (a poller?) before I would give it any credibility in my reporting. I would continue to report low-quality (by my standards) polling results, but with the clearly stated proviso that we consider the result unreliable because certain methods and data were not released.

    Why report unreliable results? Because the guy down the street will report them and it's a competitive business. Combining the report with skilled analysis will increase the value of your product.

  2. Two additional points:
    1) When writing/editing stories that cite polls, keep in mind that people often lie to pollsters, even in supposedly anonymous polls. People want to look virtuous (most people claim they wear a seat belt at all times) or don't want to admit doing some things (few people admit to driving drunk). Some candidates supposedly do better in elections than in polls because many supporters don't want to admit to supporting them. So be skeptical.
    2) Don't allow politicians or other public figures to cite poll-like figures without details and sources. During the recent health care summit, one senator claimed only one in three Americans supported Obama's health reform proposal. What was that based on?

  3. Good advice there! As a current writer and former survey research professional, I’m always poking holes in poll results. Questions are usually poorly written, samples are unscientific, and the results are full of potholes.
    Thanks for this article!

  4. Rebecca Hendricks, March 5, 2010 at 1:49 PM

    A lot of newsroom people are terrified by numbers, in my experience. Start talking about margins of error and their brains shut off. So, yeah, the competition will report it (especially if it's on the wire), so we will too (possibly unless a competitor sponsored the survey). I have no better explanation. (I had an otherwise educated and quite intelligent fellow copy editor look at me quite skeptically when I said converting a measurement in feet into inches involved simply multiplying the feet by 12. I kid you not.)

  5. It's really a shame that anyone already reading this blog probably knows and follows these guidelines already. Time to evangelize!

  6. By sheer coincidence, Jed Waverly wrote on this same issue yesterday, saying much the same thing.

  7. A large sample size does not automatically lead to good results; it is far more important to set up a good sampling scheme.
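That last point can be illustrated with a quick simulation. In this rough Python sketch the population size, the true level of support, and the response rates are all hypothetical; the point is only that a huge self-selected sample stays biased while a small random sample does not:

```python
import random

random.seed(42)

# Hypothetical population of 1,000,000 voters, exactly 50% supporting
# Candidate A (1 = supporter, 0 = non-supporter).
population = [1] * 500_000 + [0] * 500_000

# A small but genuinely random sample of 500 people:
random_sample = random.sample(population, 500)
print(sum(random_sample) / len(random_sample))  # close to 0.50

# A far larger self-selected sample, where supporters are three times
# as likely to respond (a hypothetical call-in-style response bias):
biased_sample = [v for v in population
                 if random.random() < (0.3 if v else 0.1)]
print(len(biased_sample))                        # roughly 200,000 respondents
print(sum(biased_sample) / len(biased_sample))   # near 0.75, badly off
```

Four hundred times as many respondents, and the self-selected sample is off by about 25 points; the 500-person random sample is within a couple of points of the truth.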