Summative evaluation is not only about measuring impact. Evaluation conducted at the conclusion
of a programme could focus on a range of interests, such as cost-effectiveness, the
appropriateness of educational materials, or the efficiency of implementation strategies.
In general, however, people want to know what difference a programme has made. What impact
has there been on the institution, the learner, the target audience, and the programme beneficiary?
Even cost-benefit analysis presupposes some understanding of the impact that the programme has
had and the weighing of that impact against the cost.
But measuring impact of a social intervention is not easy:
- The targets move.
- Indicators are hard to determine.
- Inputs and processes are difficult to control.
- The programme may be interrupted.
A fairground metaphor is useful. In a test of strength (and as a means of parting patrons from their
hard-earned money in the hope of impressing friends), fairgrounds often have a 'ring the bell and
win' game. It consists of a target to be hit, a hammer, and a long column with a bell at the top that
rings if the impact of the hammer on the target is hard enough.
In this exercise there is a range of indicators (normally a scale of words that begins by
mocking the person swinging the hammer, then encourages another attempt, and finally rings
the bell for a prize); a clear target; a very simple input (person plus hammer); a range of
potential processes (how the person swings the hammer); and an impact that can be measured
(assuming that the whole game is not rigged).
Would that social processes, of which education is one, were so simple.
The Moving Target
In the first place, discussions of impact make some presumptions about the target. It is assumed
that the target is known, that the programme is directed at that particular target, and that the target
is subject to no other significant alternative programmes or learning processes.
But people are constantly moving and changing as a result of a range of different social activities
and opportunities. It is unlikely, in educational programmes such as voter and civic education, that
everything will be known about the public for whom the programme has been prepared. And
because programmes are conducted over time and space, it is likely that they will be received
not only by the specified target group but also by others. The target group itself may also shift
in size or scope as more becomes known during the implementation of the programme.
Despite these difficulties, the more that is known about the learner constituency (including
through the development of a programme baseline; see Baseline Studies), the more the learners are
subject to a single programme (or the more the evaluation takes account of all the programmes
with which the learners come into contact), and the more the evaluation is open to changes in the
proposed learner constituency, the more reliable the estimation of programme impact will be.
The Indicators of Impact
In the fairground, ringing the bell is an indicator of success and at the same time the objective of
the exercise. There is a very close correlation between programme objectives (described by some
as results statements) and programme indicators. If a programme is to be evaluated summatively,
then a set of indicators should be established at the same time as the objectives are framed.
The questions that planners will ask are:
- How can we know that our objective has been achieved?
- How will we measure the achievement of the objective?
There may be a range of indicators for each objective, or alternatively one central indicator that
validates the whole set of objectives. Planners look for this silver bullet, but it is a chimera.
Spoiled and Invalidated Ballots and Voter Turnout as Indicators
Voter educators have been tempted to treat the percentage of spoiled and invalidated ballots as this silver bullet. If
the objective of a voter education programme is to ensure that voters are able to vote, then it
appears to make sense that a low percentage of spoiled and invalidated ballots means that the educational
programme was successful.
But the percentage of spoiled and invalidated ballots can be affected by many other things:
- The ballot may be very well designed.
- Voters may be allowed assistance at the point of voting.
- The definition of a valid ballot may be very broad and vote counters may be generous in
determining acceptable votes.
- Ballot box stuffing or other forms of cheating can result in prepared ballots entering the box
rather than those of the uneducated or ill-prepared voter.
Or it may be that the voter education programme has had an objective of motivating people to
vote. An obvious indicator would be the percentage turnout.
As with the spoiled ballot, there could be a range of alternative explanations:
- The sun could be shining.
- The political contestants could have succeeded in developing a high interest in the issues or
outcome.
- The social significance of the election might be considerable.
- Material incentives or threats of retaliation may have driven up turnout.
- Turnout statistics may have been manipulated by election officials.
Evaluators should interrogate indicators carefully to ensure that they do indeed have significance
only for the measurement of the impact of the programme. Otherwise they may have to spend
substantial time and money investigating whether these alternative explanations have validity
and, by a process of exclusion, attempting to show the impact of the programme itself.
Indicators that focus only on the personal do not come to terms with the overall impact of the
programme on society. Measuring the impact of the programme on the larger society
requires assumptions about cause and effect that can hardly be described, let alone measured.
The Inputs and Processes
Establishing indicators is difficult. Controlling the inputs and processes that might lead to an
impact is trickier still. Some of these can be invisible or unintended. The plan may specify
certain inputs and describe certain ways in which the programme will be implemented (the
process). But successful completion may come about because of some unanticipated and
unreported variations.
Evaluators who are involved with programmes from the beginning or who use methods that
encourage participant observation and data collection may notice these. An astute investigator
may notice anomalies that can be explained only by something unrecorded, and may then be able
to track it down.
But the bigger the programme, the more difficult it is to keep track of all the inputs and processes
and, through evaluation, to ascribe impact (or the lack of it) to one or the other except in gross or
obvious terms.
A particular set of training events can be very successful in turning out effective trainers:
- This may be a result of the training programme developed and used.
- It may equally hinge on the personality and skills of the trainer.
- The participants may all have been effective trainers in their own right.
- The participants may have been recruited from an organisation that gave them continuing
support and ongoing assessment of progress.
Possible Interruptions
In the 'ring the bell' game, the bell may fail to ring not because the hammer blow was too weak
but because the game owner has installed a hidden anti-success contraption.
There are many possible reasons why a programme may not achieve its targets even though it does
everything by the book. It is not always possible to understand these reasons or to identify
them fully.
It is not the fault of the voter education programme if an election is stolen. Citizen apathy, despite
an extended civic education programme, may be caused by a political system that disempowers
citizens rather than by a poorly developed or implemented programme.
In Summary
Establishing the impact of a programme requires basic knowledge about the target audience,
carefully considered and humbly stated indicators, the ability to make visible and reflect on
alternative reasons for the apparent success or otherwise of the programme, and extensive
knowledge not only about the plan but also about the actual implementation process.
And all of these are difficult to achieve even in a small and limited programme intervention. In a
large national programme, it therefore becomes extremely difficult to measure impact with any
degree of certainty.
Evaluators should, therefore, consider summative evaluations with some caution and develop a
range of ways of communicating the value of a programme rather than making glib statements
about the extent to which it has changed the world.