On Knowing What to Watch

September 9, 2008

Rick Shenkman, over at the WaPo, makes the obvious point that voters are stupid (h/t Ponnuru). He attacks a number of (supposedly) commonly held beliefs about how clever the electorate is, including the following:

Bill O’Reilly’s viewers are dumber than Jon Stewart’s.

Liberals wish. Democrats like to think that voters who sympathize with their views are smarter than those who vote Republican. But a 2007 Pew survey found that the knowledge level of viewers of the right-wing, blustery “The O’Reilly Factor” and the left-wing, snarky “The Daily Show” is comparable, with about 54 percent of the shows’ politicized viewers scoring in the “high knowledge” category.

So what about conservative talk-radio titan Rush Limbaugh’s audience? Surely the ditto-heads are dumb, right? Actually, according to a survey by the Annenberg Public Policy Center, Rush’s listeners are better educated and “more knowledgeable about politics and social issues” than the average voter.

This result was kicking around the blogosphere back when the study was first released, and I remember thinking “Wow, I never would have guessed that!” But I suspect I must have been out of sorts, because reading it again this morning I thought, “Hmm, this seems obviously false.” Further investigation was needed, of course, but I had compelling prima facie evidence that Jon Stewart’s viewers are better informed than Bill O’Reilly’s, namely that the latter group is willing to watch The O’Reilly Factor.

Before moving on, it’s worth noting that this has nothing to do with the politics of the two fan bases. One doesn’t suspect O’Reilly’s fans are less well informed because they’re conservative; one suspects this because his show is made for troglodytes. If we compared the knowledgeability of Public Interest readers with that of Cheech and Chong fans, we would expect the opposite result.

On to the study itself, which was conducted by the Pew Research Center, ‘a nonpartisan “fact tank.”’ Any organization that needs scare quotes to describe itself warrants suspicion. Casually skimming its contents, the reasonably astute reader will almost immediately notice that the researchers received the data, analyzed it, drew their conclusions, and wrote them up, all over the course of about four hours of heavy drinking. That, at least, is the conclusion suggested by the principle of charity. Consider:

In fact, an experiment conducted in conjunction with this survey suggests that when people are given a “multiple-choice” version of key questions, the proportion who selected the correct response increased, sometimes dramatically. For example, only 36% were able to volunteer Putin’s name when asked in the February poll, “Who is the president of Russia?” But 60% correctly selected Putin when the question was asked this way in the test survey: “Can you tell me who is the president of Russia? Is it Boris Yeltsin, Vladimir Putin, Mikhail Gorbachev, or is it someone else?” … On other questions, the differences attributable to alternative formats were less dramatic. About three-in-four (76%) were able to volunteer unaided that the Democrats controlled the House of Representatives. When on the test respondents were asked which political party controlled the House, followed by the prompt: “Is it the Democratic Party or the Republican Party,” 82% answered correctly, a six-percentage-point difference.

The report suggests two possible explanations for the differences. Some of the gap may be explained by lucky guessing on the part of people who heard the correct choice along with some incorrect alternatives. Or perhaps asking people to volunteer an answer causes some to grow anxious and momentarily forget the right answer, or simply to say they do not know in order to hurry the interview along.

These are both fine theories, although I have my own suspicions about which factor was more significant. The folks at Pew were not able to come up with a theory for why the multiple-choice format was more helpful in identifying the president of Russia than in naming the party that currently controls the House; I will leave that as an exercise for the reader.
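For the impatient, a quick back-of-the-envelope guessing model makes the exercise tractable. The 36% and 76% figures come from the quoted passage; everything else is bare arithmetic, sketched in Python:

```python
def expected_with_guessing(known, options):
    """Share answering correctly if a fraction `known` recalls the answer
    and everyone else guesses uniformly among `options` choices."""
    return known + (1 - known) * (1 / options)

# Putin question: 36% recalled unaided; three named choices plus "someone else".
print(round(expected_with_guessing(0.36, 4), 2))  # 0.52, vs. 60% observed
# House question: 76% recalled unaided; only two parties to pick between.
print(round(expected_with_guessing(0.76, 2), 2))  # 0.88, vs. 82% observed
```

Blind guessing closes most of the Putin gap (and does even better if few respondents actually pick “someone else”), while it overshoots the House question, where many unaided respondents were presumably already guessing between two parties.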

The methodology of the study leaves much to be desired. Respondents were asked 23 questions like the ones above about current events. The researchers split the respondents into three roughly equal groups based on how many questions they answered correctly. Demographics were compared on the basis of what percentage of respondents in each demographic fell into each of these groups. This is not just one way demographics were compared; it is the only way they were compared. Most comparisons look exclusively at the highest group (the top 35%). To make it into this elite group, respondents had to answer 15 of 23 questions correctly. Since the more informed folks reported consulting more news sources, every source but one (morning television) did as well as or better than the sample as a whole, with most doing significantly better. This also led to a convergence in scores, with the top seven sources all within the margin of error of each other. (Those margins are fairly large for news sources consulted by only a small number of respondents.)
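That parenthetical is worth making concrete. Here is a minimal sketch of the standard margin-of-error formula for a sample proportion; the audience size of 150 is an invented illustration, not a figure from the report:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    observed in a subsample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: a source consulted by 150 respondents, 54% of whom
# landed in the "high knowledge" group.
print(round(margin_of_error(0.54, 150), 2))  # 0.08, i.e. +/- 8 points
```

An eight-point swing either way is plenty to shuffle the ordering of the top seven sources.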

There are many more flaws with the study, but the bottom line is this: with more than half of both O’Reilly and Colbert/Stewart viewers lumped into the same category, it is very hard to conclude much at all about the relative mean information levels of the two groups. Furthermore, with the cut-off for the high information group at 15 out of 23, and questions like “what party controls the House of Representatives?”, the study provides very little information about what is being watched by people with more than a dim sense of what is going on in the world.

I am, as this post no doubt makes clear, not a statistician (although I am considering applying for a job as one at this place called ‘The Pew Research Center’). But it seems clear to me that very little of value can be reliably concluded from this study. That leaves me with my initial intuition. I suspect that the folks responsible for this study watch a lot of Bill O’Reilly.

UPDATE: I see that Pew released another of these studies last month. I haven’t looked it over very thoroughly yet, but it strikes me as odd that the WaPo article would use an older version of the story to make its point.

UPDATE 2: The newer report is better in some ways and has a larger sample size, but it was a much broader study and spent very little time on the question at hand. Respondents were asked only three questions, and the various news sources were compared based on how many of their viewers/listeners/readers answered all three correctly. Colbert Report (34%) and Daily Show (30%) viewers both outperformed the O’Reilly Factor crowd (28%), and the former result is possibly outside the margin of error (I’m not going to figure out each show’s sample size as I did with the 2007 report), but this is all pretty thin. Liberals gloating over the results of either study are being at least as disingenuous as the conservatives thrilled that O’Reilly viewers are only slightly more ignorant. As far as I can tell, the issue hasn’t been seriously studied.
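For anyone inclined to run the comparison anyway, here is a sketch of the standard two-proportion z-test; the subsample sizes below are invented placeholders, since the real ones are exactly what I declined to dig out:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical subsample sizes for Colbert (34%) vs. O'Reilly (28%) viewers.
z = two_prop_z(0.34, 150, 0.28, 200)
print(round(z, 2))  # 1.21 -- short of the ~1.96 needed at the 95% level
```

With audiences of roughly that size, a six-point gap does not clear conventional significance, which is more or less what “pretty thin” means.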

