Quantitative Media Monitoring Methods

Quantitative media monitoring methodology is often described as content analysis. This has been an influential, but not universally accepted, approach to media studies for more than half a century.

The various criticisms of content analysis generally charge that such analysis entails imposing arbitrary and inflexible categories upon content which may in reality be subject to subtler interpretations. Hence, for example, content analysis takes no account of how an audience will understand a message conveyed through the news media. It simply undertakes a quantitative analysis of that message. Quantitative analysis implies the selection of elements of the content of media output that can be counted. In many examples of academic content analysis, the indicators selected may be words. Researchers will measure the frequency with which certain words, or combinations of words, feature.

Whatever the validity of the criticisms of content analysis, it is widely used in media monitoring in the context of elections. The analysis very seldom focuses on the selection of words. Rather, monitors will identify and count one or more of the following variables:

  • Frequency with which parties or candidates are mentioned.
  • Length of time allocated to parties or candidates.
  • Frequency with which various other political or social actors are mentioned.
  • Frequency of, or time allocated to, different topics.

There are a number of other variables that monitors might wish to identify. These might include: gender of cited sources, geographical origin of the story, the time that an item is broadcast, the position of an item in a news bulletin and so on.
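As a purely illustrative sketch, the counts above might be tallied from monitors' coding records as shown below. The record fields used here (party, seconds of direct speech, topic code) are assumptions for the example, not part of any particular organisation's methodology.

```python
from collections import Counter, defaultdict

# Hypothetical coding records: each entry is one logged mention,
# with the party mentioned, seconds of direct speech, and a topic code.
records = [
    {"party": "Party A", "direct_speech_s": 45, "topic": "economy"},
    {"party": "Party B", "direct_speech_s": 10, "topic": "security"},
    {"party": "Party A", "direct_speech_s": 0,  "topic": "economy"},
]

mention_counts = Counter(r["party"] for r in records)   # frequency of mentions
topic_counts = Counter(r["topic"] for r in records)     # frequency of topics
speech_time = defaultdict(int)                          # time allocated per party

for r in records:
    speech_time[r["party"]] += r["direct_speech_s"]

print(mention_counts)     # Counter({'Party A': 2, 'Party B': 1})
print(dict(speech_time))  # {'Party A': 45, 'Party B': 10}
print(topic_counts)       # Counter({'economy': 2, 'security': 1})
```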

Different methodologies will incorporate different indicators. The common characteristics of any well-chosen indicators, however, will be that they are reliable and valid.

Reliability means that the same results will be obtained, whoever the monitor is. In other words, there will be a scientific classification system that can be replicated in most instances. For example, measuring the amount of time directly spoken by a particular candidate is reliable. Classifying topics according to a predetermined set of codes is also reliable, provided that monitors are trained in how to apply that classification system so that it will usually (say, 95 times out of 100) yield the same result.
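One common way to check this kind of reliability, though not prescribed by any particular methodology, is to have two monitors code the same sample of items and compute their rate of agreement. The sketch below is a minimal illustration under that assumption.

```python
def percent_agreement(codes_a, codes_b):
    """Share of items on which two monitors assigned the same code."""
    if len(codes_a) != len(codes_b) or not codes_a:
        raise ValueError("Both monitors must code the same, non-empty sample.")
    matches = sum(1 for a, b in zip(codes_a, codes_b) if a == b)
    return matches / len(codes_a)

# Example: two monitors classify the topic of the same ten items.
monitor_1 = ["economy", "security", "health", "economy", "economy",
             "security", "health", "economy", "security", "health"]
monitor_2 = ["economy", "security", "health", "economy", "security",
             "security", "health", "economy", "security", "health"]

print(percent_agreement(monitor_1, monitor_2))  # 0.9, i.e. 9 agreements out of 10
```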

Validity means that the data gathered actually show what they are supposed to show. For example, a mere counting of the sex of the voices cited by the media is unlikely to be a valid measure of gender bias. Too many other factors would have to be taken into account: general social attitudes towards women, the gender distribution of candidacies in the different political parties, and so on. Likewise, the amount of time allocated to a particular candidate would not, on its own, be a valid indicator of bias on the part of a media outlet. (Other considerations would need to be taken into account, such as the content of the coverage.)

Quantitative monitoring: some possible approaches

All quantitative media monitoring of election coverage is likely to focus on the time allocated to different parties or candidates. Exactly how this will be computed is a matter of choice, with various advantages or disadvantages to the differing approaches.

Many European media monitoring organisations – including the European Institute of the Media, the Osservatorio di Pavia and MEMO98 – use an approach that is predicated upon the frequency of mention of a number of predetermined “political subjects”. Each mention of these subjects within the monitoring period will be logged separately and the amount of direct speech time allocated will be recorded. Each mention will also usually be classified as positive, negative or neutral towards the “subject”.

A slightly different approach is not to count frequency, but to break broadcasting bulletins and publications into “items”. An item will normally correspond to a story within a news bulletin or a newspaper, or a political advertisement. All overtly identified sources for the item will be recorded, both by name and by category (such as political party). Direct speech times will also be counted. The entire item will be assessed to determine whether it favours and/or opposes any candidates or parties. The advantage of this method is that counting the number of sources for each item and evaluating their diversity gives an objective measure of the professionalism of media coverage. The disadvantage is that it does not strictly count the frequency of mentions of a party or candidate. Methodologies of this type are used by organisations such as the Media Monitoring Projects in South Africa and Zimbabwe and ARTICLE 19, which does media monitoring in Africa and Eastern Europe.
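A minimal sketch of what an item-level coding record might look like under this second, item-based approach is given below. The field names and example values are illustrative assumptions, not the actual coding sheets used by the organisations mentioned.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Source:
    name: str                 # person or institution quoted
    category: str             # e.g. political party, government, civil society
    direct_speech_s: int = 0  # seconds of direct speech

@dataclass
class Item:
    outlet: str
    date: str
    topic: str
    sources: List[Source] = field(default_factory=list)
    # Overall assessment of the whole item towards each party,
    # e.g. {"Party A": "positive", "Party B": "neutral"}
    assessment: Dict[str, str] = field(default_factory=dict)

item = Item(
    outlet="National TV, main evening bulletin",
    date="2024-05-01",
    topic="economy",
    sources=[
        Source("Finance Minister", "Governing party", 40),
        Source("Opposition spokesperson", "Opposition party", 15),
    ],
    assessment={"Governing party": "positive", "Opposition party": "neutral"},
)

print(len(item.sources))  # number of sources per item: a simple professionalism indicator
```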

Each of these methodologies has to address the common problem of how to assess whether a mention of a political subject or an entire news item (depending on the exact methodology) favours or opposes a candidate or party. Some methodologies use a scale of assessment, in which the monitor places the item somewhere on a measure between +2 (very positive) and -2 (very negative), passing through positive, neutral and negative.

There is clearly always going to be an issue of reliability: how will it be possible to ensure that monitors apply the same evaluation? This can only be achieved through training and practice, which will also determine the margin of error in evaluating items on the scale.

Determining positive and negative coverage

The more fundamental problem, however, is how to apply objective criteria. It is important, first, to understand that evaluating whether an item or speech is positive or negative about a particular party or candidate is not the same as determining if it is biased. The measurement of bias comes only when it is possible to assess the aggregated measures of positive or negative coverage.
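For illustration only, once each item has been classified as positive, negative or neutral towards each party, the aggregated picture of coverage might be summarised along the following lines. The party names and figures here are invented.

```python
from collections import Counter

# Classified items: (party, classification), already coded as described above.
classified = [
    ("Party A", "positive"), ("Party A", "positive"), ("Party A", "neutral"),
    ("Party B", "negative"), ("Party B", "neutral"), ("Party B", "negative"),
]

totals = Counter(classified)
for party in sorted({p for p, _ in classified}):
    pos = totals[(party, "positive")]
    neg = totals[(party, "negative")]
    neu = totals[(party, "neutral")]
    print(f"{party}: {pos} positive, {neg} negative, {neu} neutral")
```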

One effective approach is to use two sets of criteria in determining whether an item is positive or negative: context and content.

The first of these, context, refers to the way in which the story is framed. For example, if a story is about a politician appearing in court on charges of fraud, the frame is clearly negative. (Note that this has nothing to do with whether the story is accurate or fair.) If the politician is rather receiving the Nobel Peace Prize, the frame is positive. If the politician is addressing a political rally, the frame is most likely to be neutral.

The second criterion, content, refers to the overt facts and tone of the story. If the politician charged with fraud makes a particularly effective speech from the dock, this may be positive (despite the negative framing of the story). If the journalist says that the politician did not deserve to receive the Nobel Prize, this is negative, despite the positive framing. More commonly, of course, the framing and content coincide.

If context and content do coincide, then it is clear how the item will be classified. If the context and content lead to opposite conclusions (one positive, one negative), then they will cancel each other out and the item will be classified as neutral. If either content or context is neutral, while the other is positive or negative, then the latter will determine how the item is classified.
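Stated as a rule, this combination logic is simple enough to express directly. The sketch below is one possible encoding of it, using the +2 to -2 scale mentioned earlier; it is an illustration rather than any organisation's actual procedure.

```python
def combine(context_score, content_score):
    """Combine context and content assessments into a single classification.

    Scores follow the -2 (very negative) to +2 (very positive) scale,
    with 0 meaning neutral.
    """
    if context_score == 0 and content_score == 0:
        return "neutral"
    if context_score == 0:
        return "positive" if content_score > 0 else "negative"
    if content_score == 0:
        return "positive" if context_score > 0 else "negative"
    # Both non-neutral: if they point the same way, that way wins;
    # if they point in opposite directions, they cancel each other out.
    if (context_score > 0) == (content_score > 0):
        return "positive" if context_score > 0 else "negative"
    return "neutral"

# The fraud-trial example: negative framing (-1) but an effective speech (+1).
print(combine(-1, +1))  # neutral
# Nobel Prize story in which the journalist disparages the winner.
print(combine(+1, -1))  # neutral
# Negative framing, neutral content: the framing determines the result.
print(combine(-1, 0))   # negative
```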

Other quantitative methods for evaluating coverage

In the second family of methodologies already described, there are a number of other quantitative indicators that can be used:

  • Number of sources is an indicator of journalistic professionalism. Distribution and balance of sources may indicate political bias.
  • Gender of sources may be a useful indicator if carefully interpreted.
  • Geographical distribution of stories may be significant in some instances.
  • Selection of topics is likely to be important.

The final point – selection of topics – may often be a sensitive quantitative indicator of the political inclinations of the media. Political parties usually campaign on somewhat different issues from their opponents. The selection of stories covered by the media will often suggest how far they subscribe to the political agenda of one party or another.

Statistics on sources say something about balance, but not automatically about bias. A one-source story is unbalanced, but it need not be biased. If the governor of the central bank announces a rise in interest rates, no other voice is required because it is a straight news item. (Good journalistic practice might suggest that a comment from the political parties and independent experts would be helpful.) On the other hand, coverage of political violence that only quoted from one party would probably be biased.
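As a final illustrative sketch, source balance of the kind discussed here can be summarised by counting sources per political category across all coded items, and by flagging one-source items. The categories and item numbers below are assumptions for the example.

```python
from collections import Counter

# Hypothetical (item_id, source_category) pairs taken from coded items.
coded_sources = [
    (1, "Governing party"), (1, "Independent expert"),
    (2, "Governing party"),
    (3, "Opposition party"), (3, "Governing party"),
    (4, "Governing party"),
]

by_category = Counter(cat for _, cat in coded_sources)
total = sum(by_category.values())

for category, count in by_category.most_common():
    print(f"{category}: {count} sources ({count / total:.0%} of all sources)")

# Items with only one source: unbalanced, though not necessarily biased.
sources_per_item = Counter(item_id for item_id, _ in coded_sources)
one_source_items = [i for i, n in sources_per_item.items() if n == 1]
print("One-source items:", one_source_items)  # [2, 4]
```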