Quick little cut-and-paste post. THIS is what the libs will try to use to counter the right's claims about liberal media bias. It's sort of a long read, but it really tears apart this study from Indiana University, as noted by the author's closing statement:
Around February 2009, Indiana University professors announced the results of a study concluding that, “A visual analysis of television presidential campaign coverage from 1992 to 2004 suggests that the three television broadcast networks -- ABC, CBS and NBC -- favored Republicans in each election.”
Consequently, I immediately contacted Erik Bucy, one of the authors of the study and an associate professor in the Department of Telecommunications of IU's College of Arts and Sciences. The study is titled, Image Bite Politics: News and the Visual Framing of Elections (Oxford University Press).
Professor Bucy was extremely polite to me via email, and even sent me Chapter 5 of his study so that I could determine how the study was coded. Coding is the means by which social scientists take a non-quantifiable variable, such as Bias, and make it quantifiable.
The coding for this study matters because coding choices often lead to absurd results. For example, we often read magazine studies proclaiming that X city is the "fattest" city. I am sure readers would be surprised to learn that this conclusion is rarely determined by gathering the Body Mass Index of a random sample of people, but rather by counting the number of workout gyms per capita. I digress.
Summary of My Observations
Below I have carefully analyzed the data from the Indiana University study. Since Indiana University plans to conduct a similar study for the 2008 election, I run the risk of the Professors refusing to send me the coding from their next study. Nevertheless, I owe my readers a truthful evaluation of what I read, and I justify all my critical statements based on the data from the study. Where possible, I cite direct passages from Chapter 5 for support.
Overall, this study did not find statistically significant evidence to support the vast majority of the claims made by the authors. Some statistically significant data even contradict the authors' claims. In fact, the editorializing conducted in Chapter 5's discussion section goes so far outside the four corners of the study that it borders on pure advocacy. As a result, most if not all of the headlines seen around the Internet and on television based on this study have no merit whatsoever.
When the public discusses media bias, what they are usually referring to is news content or news commentary. This Indiana University study, although briefly addressing content and commentary, focused heavily on the largely ignored “visual analysis” of television news content instead.
According to the authors, 50 years of research has found no statistically significant evidence of liberal media bias. This is not surprising if one dives deeper to figure out how "bias" has been coded over the last 50 years. According to the authors,
“The most basic and widely used measure of journalistic bias is volume of coverage. Volume in this context refers to how much media attention a particular party or candidate received.”
The fact of the matter is that volume of coverage is a terrible indicator of bias. What really matters is positive information presented about a candidate, and negative information that is omitted or “sugarcoated” about a candidate, not how many times one candidate is covered. This is especially true after the primaries when there are only two serious contenders for the Presidency.
While I concede that not one reporter visited John McCain on his trip to South America, and all three network news anchors traveled with Barack Obama to Germany, Republicans rarely argue that the media is not covering them enough. Rather, Republicans cite an enormous body of evidence that the mainstream media, with the exception of Fox News and Talk Radio, slant the news favoring liberal policies, programs and candidates.
Whether or not true content bias can ever be adequately coded is a very significant issue. For example, how would one adequately code interview bias? Below is an apples-to-apples comparison of Charlie Gibson interviewing Barack Obama and Sarah Palin. (Note that some of the questions are paraphrased.)
Questions to Barack Obama:
- How does it feel to break a glass ceiling?
- How does it feel to "win"?
- How does your family feel about your "winning" breaking a glass ceiling?
- Who will be your VP?
- Should you choose Hillary Clinton as VP?
- Will you accept public financing?
- What issues is your campaign about?
- Will you visit Iraq?
- Will you debate McCain at a town hall?
- What did you think of your competitor's [Clinton] speech?

Questions to Sarah Palin:
- Do you have enough qualifications for the job you're seeking? Specifically, have you visited foreign countries and met foreign leaders?
- Aren't you conceited to be seeking this high-level job?
- Questions about foreign policy:
  - the territorial integrity of Georgia
  - allowing Georgia and Ukraine to be members of NATO
  - the Iranian nuclear threat
  - what to do if Israel attacks Iran
  - Al Qaeda motivations
  - the Bush Doctrine
  - attacking terrorists harbored by Pakistan
- Is America fighting a holy war? [misquoted Palin]
Not only did Charlie Gibson favor Barack Obama, but it gets worse: the person who first coined the term "Bush Doctrine" (Charles Krauthammer) proclaimed that even he did not know the answer to the question about the Bush Doctrine. So aside from Sarah Palin being treated more harshly, she was the unlucky recipient of a question to which there was no answer.
A typical media bias study looking at volume of coverage would never identify Charlie Gibson's bias. In fact, a scientific study on media bias based on volume of coverage would conclude that Charlie Gibson gave a non-biased interview! If the social science community were honest with itself, it would admit its methods of observing media bias are simply inadequate.
Conversely, simple observation demonstrates an objective fact: Charles Gibson gave a significantly more difficult interview to Sarah Palin than Barack Obama.
Take two more examples: 1) in a 2000 interview of George W. Bush, one of the first questions was, "Who is the President of Chechnya?"; and 2) a mid-afternoon "Breaking News" headline on MSNBC asked, "How many houses does Palin add to the Republican ticket?" How would social scientists code these questions for bias?

Volume of Coverage
As already pointed out, the sample size for this Indiana University study was lacking, to say the least:
They examined 62 hours of broadcast network news coverage -- a total of 178 newscasts -- between Labor Day and Election Day over four U.S. presidential elections between 1992 and 2004. Cable news outlets, including CNN and Fox News, were not included in their research. The professors are now looking at 2008 election coverage.
That's 62 hours of coverage over four election cycles, or less than 16 hours per cycle! Even within the last two months of an election cycle, the three networks would air about 90 hours of evening newscasts, which means the study skipped more than 80 percent of the available sample.
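The arithmetic behind that sample-size complaint takes a few lines to check. The 62 sampled hours come from the study's press release; the breakdown of the roughly 90 available hours as 60 nights of half-hour newscasts across 3 networks is my assumption:

```python
# Sanity-checking the sample-size arithmetic. The 62 sampled hours are the
# study's figure; 60 nights x 3 networks x 0.5 h per newscast is an assumed
# breakdown of the ~90 available hours per cycle.
sampled_hours = 62
cycles = 4
per_cycle = sampled_hours / cycles       # hours actually analyzed per cycle
available = 60 * 3 * 0.5                 # ~90 hours of evening news per cycle
skipped = 1 - per_cycle / available      # fraction of newscasts never sampled

print(f"{per_cycle} hours per cycle; ~{skipped:.0%} of available newscasts skipped")
```

That works out to 15.5 hours per cycle, with roughly 83 percent of the available newscasts never examined, consistent with the "more than 80 percent" figure above.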
Even so, for the sake of argument, I will analyze the data below as if it were a good representative sample of the media, which it clearly is not.
For the Volume of Coverage variable of the study, "Chi-square analysis were not significant either overall or for any of [the] four election years." This means that despite any trends favoring Republicans, all the results could have arisen by chance, and the study demonstrated absolutely nothing! Nevertheless, this did not stop the authors from using the statistically insignificant trends in their press release to claim that there was bias in favor of Republicans:
Grabe and Bucy found the volume of news coverage focusing exclusively on each party -- one measure of media bias -- favored Republicans. Their research found there were more single-party stories about Republicans overall and in each election year except 1992. When they studied the time duration of these stories, no pattern of favoritism was evident.
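To make concrete what a non-significant chi-square result means here, below is a minimal sketch using scipy with made-up single-party story counts (not the study's actual data). Even with Republicans "ahead" in every year, the test cannot rule out chance:

```python
from scipy.stats import chi2_contingency

# Hypothetical single-party story counts (NOT the study's data) in which
# Republicans lead in every election year, 1992-2004.
counts = [
    [30, 28, 35, 32],  # Republican-only stories per election year
    [25, 27, 30, 31],  # Democrat-only stories per election year
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# A p-value at or above 0.05 means the Republican "lead" is fully
# consistent with random variation -- which is the study's own result.
```

With these counts the p-value is far above 0.05: a visible "trend" that a chi-square test says is indistinguishable from noise.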
The press-release statement has absolutely no merit, since all the results were statistically insignificant. I will address later how the authors take additional statistically insignificant trends to editorialize entire sections of the study.

Visual Analysis
In order to conduct visual analysis, several variables were tested. I outline several of them below.

Visual Weight was defined as, "A broad structural level assigned to stories about competing candidate[s]…compared for emphasis and assigned importance."
Visual weight also observed two subcategories: 1) Type of Story; and 2) Story Position.
Each of these subcategories was then broken down into a spectrum of smaller variables. One end of the spectrum was most favorable toward the candidate, while the other end was least favorable. For example, the Type of Story that was most favorable was an interview, while a news anchor reading about a candidate from a teleprompter without a visual was least favorable.
Overall, the full spectrum progressed from reader, to voice-over, to voice-over-sound-on-tape, to package, to an interview.
Like the volume of coverage variable, "The differences between parties were not statistically significant." In fact, Democrats received more interviews! Nevertheless, this did not stop the authors from using the statistically insignificant trends that favored Republicans to justify their conclusions. It also seems incredibly suspicious that the authors ignored the interview trends that favored Democrats when making their final conclusions.
The Story Position subcategory was broken down into whether a story about a candidate was the lead story, a story before the first break, or a story after the first break. A lead story was determined to be the most favorable for a candidate.
Like before, "[S]tory position in the newscast, produced low counts in some cells; consequently, the Chi-square statistic was not applied in cross-tabulation comparisons." Therefore, there was no statistically significant evidence to justify a claim of story position bias. Nevertheless, the authors again used the trends favoring Republicans to justify their claims of bias in favor of Republicans.

Packaging Techniques was another variable; it included editing techniques and camera angles that were favorable or less favorable toward the candidate.
Like the Visual Weight variable, each subcategory was broken down into smaller variables.
Editing techniques included a variable labeled "last say," meaning that the journalist allowed the candidate to have the last word when a story featured the candidate.
The authors' data for "last say" is very ambiguous. As I mentioned above, "last say" was determined based on the quantity of "last says," not the duration of a "last say." Yet the authors seem to have measured both types of "last say" data, and they make it very unclear whether both types are statistically significant. The results for duration concluded that, "Republicans had the last say more often than Democrats, but only 1996 appear[ed] to be statistically significant."
The best I can tell from the data is that in 1996, based on the extremely small sample size, Bob Dole had longer, and possibly more, "last says" than Bill Clinton. But the researchers do something extremely odd with their data: they take the statistically insignificant years and combine them with 1996 to conclude that the media was biased in favor of Republicans in every year.
Why the authors felt that Bob Dole getting more "last says" than Bill Clinton benefited George W. Bush against John Kerry leaves me speechless.
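The pooling move described above is easy to demonstrate. The sketch below uses invented per-year "last say" counts (the only figure echoed from the press release is the roughly 8-to-1 ratio in 1996): one lopsided year can drag a pooled chi-square test into significance even though every other year, tested on its own, shows nothing.

```python
from scipy.stats import chisquare

# Invented per-year "last say" counts: (year, Republican, Democrat).
# Only 1996 is lopsided, echoing the press release's 8:1 claim.
years = [(1992, 12, 9), (1996, 24, 3), (2000, 9, 8), (2004, 8, 10)]

for year, rep, dem in years:
    stat, p_year = chisquare([rep, dem])     # test against a 50/50 split
    flag = " <- significant" if p_year < 0.05 else ""
    print(f"{year}: p = {p_year:.3f}{flag}")

# Now pool every year together and test again.
total_rep = sum(r for _, r, _ in years)
total_dem = sum(d for _, _, d in years)
stat, p_pooled = chisquare([total_rep, total_dem])
print(f"pooled: p = {p_pooled:.3f}")
```

With these numbers the pooled p-value lands below 0.05 even though three of the four years are individually indistinguishable from a coin flip: the "overall" significance is manufactured entirely by 1996.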
Another variable was labeled "lip flap," where a candidate is shown speaking while the journalist talks over the candidate's muted audio.
The authors again state, "[N]one [of] these differences were statistically significant." Still, the authors use the trends to justify their conclusions, and in their press release they state the following:
In their research, Democrats were more likely to be subjects of the "lip-flap" effect, while Republicans more often got the last word. GOP candidates were favored in terms of having the last say in all but the 2004 election. In 1992, the difference was distinctive with Republicans having the final say 57.9 percent of the time. In 1996, Republicans had eight times as many last-say opportunities as Democrats.
The above statement is again not entirely true, because the only statistically significant difference was in 1996, for the "last say" variable. The researchers simply spread the 1996 data over statistically insignificant years to get a particular result for all the years combined. In fact, the researchers acknowledge little statistical support in the study:
Given the pattern of findings for these editing variables, it is reasonable to conclude that the networks have give[n] a persistent advantage to Republicans over Democrats. Yet statistical support for this claim is spotty.
The camera angle variables are self-explanatory (N.B.: there are some camera techniques that I do not discuss because they, too, are statistically insignificant). Extreme close-ups and high angles were deemed to be less favorable when compared to medium shots, close-ups and low angles. More distance in a camera shot also created a less intimate shot of the candidate.
Overall, in terms of camera angles, only 1992 demonstrated a statistically significant difference: Democrats received more high angle shots. However, it appears that when all years are tabulated together, and 1992 is spread across the statistically insignificant years, Democrats receive more high angle shots overall. Again, this is a pretty strange way to demonstrate more high angle shots for John Kerry: by citing more high angle shots for Bill Clinton.
Whether or not Republicans received more low angle shots was not statistically significant.
For shot length, it appeared that medium shots fluctuated over time, and since they are the most neutral type of shot, it is difficult to conclude bias either way. Yet, like before, the authors used statistically insignificant trends to reach baseless conclusions. In 1992, the only year where long shots were statistically significant, "George H.W. Bush and Dan Quayle were presented in significantly more long shots than Bill Clinton and Al Gore." Even according to the authors:
"Long shots,...are not conducive to establishing rapport between candidates and viewers."
All other camera zooms were not statistically significant.
So overall, the only statistically significant data showed that the long shot variable favored Democrats in 1992 (Republicans received more of the unflattering long shots), although Democrats were hurt in 1992 with more high angle shots. Republicans also received more "last says" in 1996. All other trends were statistically insignificant, or no real conclusion could be drawn from them. Nevertheless, this did not stop the researchers from editorializing their own conclusions based on statistically insignificant trends. In fact, in their press release they say just the opposite:
"Republicans were seen least through the scrutinizing and unflattering perspective of an extreme close-up. This was the case overall and for all election years except 1996," they said. "Long shots . . . were more evident in coverage of Democrats than Republicans overall, but not at statistically significant levels."
That is not true; Democrats received fewer long shots than Republicans in 1992, at a statistically significant level. The authors ignore their own data!
Moreover, the extreme close-up data was not statistically significant.

What Can Be Learned from This Study
Overall, the most that can be learned from this study is that after taking an extremely small sample of the mainstream media during election coverage:
1) There was almost no statistically significant data to support visual bias in favor of either party;
2) In 1996, Bob Dole benefited from having longer, and possibly more (see above for why there is confusion), "last says" on television. When all the years are totaled together, there appears to be a potentially statistically significant benefit to Republicans. But this raises the question: why does combining one year of statistical significance with other years of statistical insignificance make the bias in the insignificant years more evident?;
3) Bill Clinton received more high angle shots in 1992, and when all the years are totaled together, there appears to be a potentially statistically significant pattern of high angle shots being used against Democrats. Again, this raises the question: why does combining one year of statistical significance with other years of statistical insignificance make the bias in the insignificant years more evident?;
4) In 1992, Democrats were shown in fewer long shots, which benefited Democrats.

My Responses to the Authors' Editorializing
Lastly, I will cite direct quotes from the authors' discussion section to demonstrate blatant editorializing without any justification. Keep in mind that I disagree with the entire discussion section, but chose only select quotes to make my point.
"Republicans emerged as the primary beneficiary of visual weight in all elections except 2000 and clearly benefited from the application of visual packaging techniques under journalistic control in all elections."
No: no data for visual weight was statistically significant. "Lip flaps" were also statistically insignificant. Only "last says" had any evidence of favoritism, and only in 1996. And all of this ignores the terribly poor sample size. Words like "clearly" are inappropriate.
"In 1996-a near-landslide election year for Bill Clinton-the networks' preferential treatment of Republicans (i.e., the Dole campaign) reached the highest level with consistent visible gaps between the two parties in terms of volume of coverage, visual weight, and visual packaging. This pattern is prominent and persistent enough to call network news coverage of the 1996 campaign biased."
Bill Clinton received only 49.2% of the vote in 1996; to call that a near-landslide is an interesting definition of landslide. Again, volume of coverage was statistically insignificant, as was visual weight, and only "last say" favored Republicans. If anything is biased, it is the authors' editorializing.
"That Clinton won both the 1992 and 1996 elections despite this unfavorable visual treatment testifies to the former president's uncanny ability to connect with viewers [through] televisual media."
In 1992, Bill Clinton received only 43.0% of the vote. Had Ross Perot not been in the race, he more likely than not never would have been President. Moreover, what scientific evidence from this study demonstrated Bill Clinton's "uncanny ability to connect with viewers"? I see no such data at all! Since the authors have decided that no data is necessary to make claims, I will assert with absolute certainty that liberal media bias was the reason why Bill Clinton won.
"[George W. Bush] enjoyed more favorable treatment via camera and editing techniques that advanced the appearance of power and leadership. He also received less unfavorable camera and editing treatment than Kerry."
I see no significant evidence whatsoever of bias in favor of George W. Bush. None. Unless the authors want to proclaim that Bob Dole helped George W. Bush by getting more "last says" in 1996, this statement is just baseless.
"What can be concluded from these [sic] data is that there is a persistent pattern of visual bias in network news coverage of presidential candidates and that this slant clearly disfavors Democrats."
The only way one could use the word "clearly" is if the data were statistically significant. If one wants to look at trends that are statistically insignificant and meaningless, one also has to accept that Democrats received more interviews than Republicans. Even so, commenting on statistically insignificant trends is not science.
"Given these findings, an important question to ask is how much visual bias is necessary to declare news coverage of elections clearly biased. Some would argue that only statistically significant differences between political parties should count. This approach seems appropriate in making assessments in one-shot studies of single election years. However, when persistent patterns emerge after examining multiple elections even if all point-by-point comparisons are not statistically significant, it seems justifiable and important to report these patterns as general tendencies. In our analysis, the overwhelming pattern of findings points to evidence of visual bias in favor of Republican candidates."
No, that is called data fudging! For most of the study there was no statistically significant data for individual years, so you spread significant years over the insignificant years to find meaningless significance. That is not how science works.
As I addressed above, Bob Dole having more "last says" than Bill Clinton is not evidence of George W. Bush having more "last says" than John Kerry. Either the media gave George W. Bush more "last says" or they did not. If a difference is not statistically significant at one point, you cannot combine it with other points and claim all the points were statistically significant.
"Our observations of visual bias cut against the long-standing accusations of liberal media bias leveled during campaigns. Two explanations, one rooted in practice and the other in media ownership, deserve consideration here. First, because there is a long history of publicly accusing the media of liberal bias, journalists may overcompensate by remaining hesitant to present Democrats in a visually favorable light; at the same time, on account of the pressure they might be reluctant to apply unfavorable visual packaging to Republicans. Most likely, this happens at low levels of awareness and explains the subtle but persistent pattern of favoritism toward Republicans."
Your observations do no such thing. Most of the data was not statistically significant, and you ignored the insignificant data that favored Democrats (long shots and interviews). Even so, where is there evidence that this visual data was so overwhelmingly powerful that it outweighed the media's content bias, which was not captured by volume of coverage analysis? There is no data!
"Given the long shelf life of the liberal bias accusation, it is indeed plausible that conservative pressure groups have succeeded in moderating the coverage that Republican candidates received."
How on Earth do you reach this conclusion from your statistically insignificant data? In 1992, there were statistically significant high camera angles harming Democrats and long shots harming Republicans, and in 1996, Bob Dole got a few more "last says." Based on that alone, you managed to conclude that a huge Republican media machine thwarted liberal media bias.
And I leave my favorite two quotes for last.
"Unfortunately, when it comes to claims about political coverage, 'discussions of news media faults too often fail to distinguish criticisms based on unsystematic observation from those based on more solid evidence.'"
Too bad the researchers have not realized that unsystematic observation is much more reliable than scientific studies, especially when researchers cite statistically insignificant data. In fact, what is the difference between unsystematic observation and commenting on statistically insignificant trends?
"[M]ost academic studies of bias, which have asserted null findings, have not had a noticeable impact on public debate."
The translation of this quote is that the researchers have noticed that volume of coverage studies have tended to show no evidence of bias over the last 50 years (see above). The reason those studies claim there is no evidence of bias is that they produced null findings. Null findings are also known as no statistical significance. Yet for their own study, the researchers want to editorialize statistically insignificant data to invent findings for which there is no evidence!
Overall, none of the exaggerated claims being made about this study have any validity, yet this study will be used by Democrats to assert, without justification, that there is Republican-leaning media bias.
Even worse, when the next Indiana University study comes out covering the 2008 election, it will likely conclude that the media favored Republicans as well, even though that was the most biased coverage in favor of Democrats in the history of this country. Hey, if scientists can make statements without statistically significant data, I might as well join in the fun and call it science too.