A good evaluation will meld qualitative insights with quantitative analysis to establish a strong case
for the recommendations being made. It will consider the practical utility of its recommendations and
will be conducted in such a way that those recommendations are likely to be adopted and
implemented.
There are different ways of achieving these goals. Designing an evaluation requires some
decisions about approach:
- who should be involved
- which design to choose
- what the objects of the evaluation will be
Then, some more detailed choices must be made about data collection instruments and methods
(see Data Collection Options).
Independent Outsiders or Knowledgeable Insiders?
Should the evaluation be conducted by independent outsiders or by knowledgeable insiders? What
relationship will there be between external evaluators (if any) and programme staff? These
questions are addressed in Who Should Evaluate.
Typical Evaluation Designs
A Professional Opinion
Many evaluations could more easily be considered professional opinions. A single person (or
possibly a small team) is given carte blanche to speak to stakeholders using a semi-structured
interview, conducted either one to one or in a group. The evaluators will also have access to the
programme's documents.
On the basis of these interviews and documents, and even using the series of interviews to test
their developing opinion, the evaluators prepare a report. The report can be submitted as it stands
or, if there is time, tested with a representative group of stakeholders before submission.
Such an evaluation stands or falls by the reputation of the evaluator. It can be done rapidly and at
limited cost and, because of that reputation, on the basis of a very generalised terms of reference
(TOR) document. Evaluators are chosen because they know the field and the background to the
programme, and because they are able to enter the world of the stakeholders with ease.
An Audit
A related design gives the evaluation team access to all documentary material from the
programme, and the evaluation is conducted entirely as a paper assessment. No interviews are
conducted other than with those who commission the report, and all the information the evaluators
need is assumed to be available in the documentation.
Such an evaluation can be extended with general surveys, based on preliminary indications of
areas of interest, to gather additional data. But the primary sources remain documentary.
While such an investigation (perhaps of the voter education materials) can be useful, it can never
replace an evaluation of a programme in action.
A Disciplined Conversation
The most complex, and most participatory, evaluation is the one best described as a continuing
discussion.
In such a design, the discussion begins with the development of the TOR. It may include the
establishment of one or more standing committees of stakeholders to assess the progress of the
evaluation, to discuss data and findings, and to dictate further research.
Evaluators typically play the role of group facilitators and technical assistants. They may also
manage the collection of information, although data may even be collected by individual
stakeholders.
In such an evaluation, the final report is negotiated, and reporting may even take the form of a set
of meetings at which recommendations or proposals are not only assessed but put into action by
the responsible bodies or individuals.
How Close Should Evaluators Get?
Between these three typical designs lie many nuances, and each evaluation is approached by the
evaluation team in the manner most likely to yield reliable results. It is the nature of participatory
evaluation to become closely entwined with general programme implementation and to become
increasingly self-monitoring rather than summative.
In such participatory exercises, the role of evaluators can become a contested one. They are
outsiders with insiders' influence. Confusion can develop between evaluation and programme
implementation when evaluation, or reflection on experience, becomes primary.
In an ongoing adult education group, this can be appropriate, but in a national education
programme, it can become cumbersome and undermine the general programme design.
Objects of Evaluation
Typically, an evaluation begins with a set of questions to which answers are sought. Such a list
can become more extensive as the evaluation proceeds, or it may be discovered that a smaller and
more concise list is sufficient.
These questions need to be generated in consultation with the organisation that sponsored the
evaluation. The various stakeholders may frame their questions differently or ask different
questions, but the final list establishes the parameters of the evaluation and its objects.
By setting out a list of questions, the evaluation provides the first step toward utilisation of the
results: relevance is predetermined, and ownership is partially guaranteed. The evaluators may
discover that, as a result of ignorance or intentional misdirection, stakeholders have not asked a
crucial question, which the evaluators then add to the list.
They do so at their peril. They must explain their reasons for including such questions and may be
accused of going beyond their brief. It will be up to the evaluators to establish the importance of the
question for the outcome of the evaluation.