Evaluators speak of formative evaluation, an activity similar to monitoring: it runs parallel to programme implementation and seeks to refine and improve the programme during the implementation phase.
Advertising agencies speak of tracking research: keeping the programme on track, or keeping track of the programme.
The tools used for such evaluation can be the same as those used in more traditional end-of-project or summative evaluation (see Measuring Impact). They also include standard managerial tools and those developed for assessing individual educational events.
The difficulties faced by formative evaluation are not those of data collection. They are primarily those of establishing mechanisms for turning evaluation insights into programmatic innovation and amendment. This section deals with conflicts that might emerge, with the importance of the programme plan (see also Obtaining and Maintaining Commitment To the Plan) and with some monitoring tools.
Monitoring Must Keep Pace
Education programmes run on a tight schedule. Events and activities are planned in advance, and decisions about the production of materials, the preparation of educators, and the broadcast of information are not easily altered once made. Face-to-face activities, in particular, require substantial lead times.
Evaluators must be closely linked to those managing the programme and must be able to analyse incoming data rapidly if their findings are to make a difference. They also need a good knowledge of the programme's purpose and objectives so as not to suggest courses of action that would take the programme away from them. All suggested changes need to enhance the achievement of objectives, with one exception.
Extended programmes can build in mid-term assessments that are more thorough and can include an assessment of strategy and objectives. Any changes recommended as a result of feedback received have to be carefully considered, especially if there are contractual obligations based on the original programme objectives and outline. While it is relatively easy to make changes to how a programme is implemented, it is much more difficult to make changes to what is intended.
In some cases such change is necessary, if the information being received suggests major problems with the programme design. Formative evaluation, however, is usually concerned with refinement.
Conflict between Evaluators and Programme Managers
Even if there is a clear distinction between the evaluation and management teams, and even if the roles are carefully clarified, conflict can emerge between those who obtain and analyse evaluation data and those who are responsible for programme implementation.
Regular meetings, and a formal agreement about how to resolve such conflicts, help. Evaluators may need to record, for the summative evaluation, recommendations that were not followed. At the same time, having recommendations followed can also cause conflict if the changes are not successful: evaluators can suddenly find themselves evaluating programme innovations that they themselves prompted but which are not working.
The role of evaluators therefore becomes quite problematic unless they act as technical assistants providing unanalysed data directly to programme managers and assisting in the joint analysis of such data. This relationship is less open to conflict.
Working the Plan
The preliminary work done on the education plan is the single most useful tool available for formative evaluation or programme monitoring. Against the plan it is possible to check regularly the extent to which the programme is being implemented as intended, and the extent to which changes in the context or amongst the target constituency require adaptations to the programme.
Having a documented plan also enables such changes to be recorded and noted for the later, general evaluation.
Tools for Monitoring
Beyond the general data collection tools discussed in this topic area, the client response or post-event reaction form is perhaps the most important monitoring tool.
Such response forms have been refined by commercial concerns, and these can be used as models of layout and brevity. When the audience is literate, individual responses to events or particular services can be collected, processed, and used to fine-tune the programme. When the audience is illiterate, small group discussion and recorded feedback can serve a similar purpose.
Additional tools can include telephone complaint lines, peer assessment of educators, short surveys, and occasional gatherings of stakeholders in focus-group-like discussions.
Role of Staff
The best source of information is a reflective and aware staff. Orientation to evaluation and monitoring concerns should be part of every training event, and programme meetings should include opportunities for staff to communicate what they have learned and discovered from their interactions in the field.
The purpose of formative evaluation is not to be able to say "I told you so," but to make sure that the programme is appropriate and effective. Everything should be geared toward this end, and the educators should create a learning environment both internally and externally.