Many of the most pressing issues facing professional observers have come to light already. As discussed in Part IV, at the heart of observers’ work, and their most fundamental challenge, is arriving at an overall assessment of the electoral process and communicating it clearly and meaningfully, without oversimplification. This requires that observer groups judge how to weigh the relative importance of different parts of the electoral process, different obligations, and contextual influences in light of an increasing emphasis on standardized assessment criteria. This paper also has touched on challenges inherent in using and observing technology, following up on recommendations, avoiding the legitimization of undemocratic processes, and harmonizing donor approaches with long-term observation models. Three broad areas deserve further discussion:
- maintaining the quality of observation and adherence of observer groups to their mandate;
- developing useful standards for the observation of election technologies; and
- ensuring and measuring observation’s impact on democratic development.
Quality control
The professionalization of international observation along the lines laid out in the Declaration of Principles aims for high common standards among practitioners. Host countries and the international community should be able to trust the independence and methodological rigor of observers based on their endorsement of the Declaration of Principles. As the document has gained recognition, many new groups have sought to add their names. This has raised questions about variance across groups in their degree of professionalism and methodological credibility. While the Declaration of Principles aims to raise standards for observation among all practitioners, and not to be exclusive, there is no mechanism in place for assessing or monitoring the quality of observation carried out by endorsers. There is no vetting process or trial period.
A separate question is whether groups blur the lines between various democracy-building activities. As funding for international observation has waned among some donors and in some parts of the world, groups have broadened their reach, trying to conduct observation as well as follow-up activities. While in principle this kind of holistic approach is logical, it complicates the traditional boundary between observation and assistance. Advising an EMB on procurement or a legislature on boundary delimitation precludes the possibility of assessing the performance of these bodies or their affiliates in a neutral manner.
The third challenge related to quality of observation is the criticism that observers are not accurate or critical enough in their assessments. Kelley, for example, holds that the objectivity (and thus credibility) of otherwise rigorous and well-intentioned organizations can be hamstrung by inherent or implicit biases. As she points out, not all “biases,” or influences, are by definition bad, but an awareness of them is necessary if groups are to mitigate their effects and strengthen their credibility. [113] Kelley contrasts intergovernmental observer organizations with nongovernmental organizations, arguing that intergovernmental groups whose membership consists of less democratic states are least likely to criticize other less democratic states (perhaps to deflect criticism of member states’ own practices). [114] Second, she demonstrates that negative overall assessments are much more likely when fraud is overt and occurs around election day, while administrative or pre-election problems are less likely to trigger a negative evaluation. [115] While Kelley labels this the “subtlety bias,” it can be difficult to separate from observers’ intentional weighting of different parts of the process and different obligations. The third “bias,” already mentioned, is the influence of donor governments’ policy priorities – particularly where control of aid is concerned. [116] The fourth comes from the fact that while observer groups are impartial with regard to political actors, they are not without a stance: they are inherently pro-democracy. Because building democracy is explicitly a goal, Kelley writes, “When progress is partial but the election still falls short of meeting democratic standards, monitors may praise the progress, hoping their encouragement will help consolidate the gains.” [117] Observers may be tempted, in other words, to downplay the missteps of a country that appears to be advancing overall toward democracy, and whose authorities seem to have the will to guide it there.
The fifth consideration is how assessments will influence stability in the host country. [118] It is important to note, however, that high-quality observation does not require indifference to the impact of one’s statements. Where a statement has a good chance of inciting violence, the issuing organization may appropriately delay its release until the electoral climate calms.
Technology and public confidence
E-voting and related technologies have provided a new frontier in observation methodology. While harmonizing methodologies, groups must understand whether and how each type of technology is observable. “Technology” can range from biometric voter registration (BVR) to direct-recording electronic (DRE) voting machines, to mobile ballot boxes, to ballot scanning tabulation systems such as optical mark recognition (OMR). As noted, such technologies require a substantial degree of technical knowledge to evaluate, and any mission operating in a country employing voting technologies requires some specialized field staff. Yet observers also must be able to assess how aspects of the technology’s implementation correspond with core international obligations. Even if a technology is soundly built, it is unlikely to inspire public confidence if its design and use are not transparent enough to be observed and assessed by impartial monitors. As ODIHR’s 2013 manual on new voting technologies (NVT) underscores, successful implementation of these technologies relies on voter confidence in election administrators. [119] Observers, meanwhile, cannot contribute to building public confidence in the electoral process if they are unable to draw informed conclusions about the integrity of the technologies used.
In addition to the challenge of observability, new technologies bring overt risks of “technical failure, external interference…, internal malfeasance, and the loss of oversight by and the accountability of the election management bodies.” [120] One of the most serious concerns in this regard is regulating and observing the vendors who design, provide, and may advise on the implementation of technologies. In many cases, authorities look to foreign companies with expertise unavailable in their own country. Transparency, or the perception of it, can be clouded by a lack of information regarding the compensation of vendor staff or their design process. Regardless of where and by whom a technology is developed, even if by electoral management body staff, the observation of coding is extremely difficult.
Certification of voting technologies by experts is often discussed, and some bodies, including the Council of Europe, have developed guidelines for doing so. The Council of Europe’s “Certification of E-voting Systems” includes a provision ensuring that all steps in the certification process are open to accredited observers. [121] But these guidelines are neither binding nor universal. The Council leaves the decision to engage a certifying body, and the design of the certification process, to individual member states. However, the recommended principles against which certifiers would evaluate technologies can be used by observers even if no certification process is in place. If allowed full access, observers with the proper training can assess, for example, whether the system offers robust security, adequate protection of ballot secrecy, and a paper trail for verification of votes – as well as whether those assigned to operate the technology use it properly. A key challenge remains linking technology-specific lines of inquiry to the obligations-based assessment framework that guides reporting on the rest of the process.
Measuring impact
Once a mission has issued recommendations stemming from its observation of an electoral process, it faces the dual challenge of ensuring that they have an impact on the host country’s political development and of measuring that impact. As noted, formal follow-up activities to EOMs are a relatively new component of observation. They are hard to define for donors and to explain to host countries, absorb resources for an indeterminate period of time, and may lead to few tangible achievements. The difficulty of maintaining an international presence often results in responsibility for follow-up being transferred to citizen organizations willing and able to play an advocacy role. If coordinated carefully, this entrusts local groups with ownership of their own political process. They should not feel, however, that they are being pressed to push an external reform agenda. Follow-up can take the form of both monitoring and advocacy: encouraging governments to implement suggested changes to bring practices into alignment with international standards, as well as reporting systematically on the status of such efforts. International treaty-monitoring bodies such as the U.N. Human Rights Committee, which tracks states’ compliance with the ICCPR, employ mechanisms for domestic and international NGOs to submit shadow reports containing their findings. These may influence the treaty body’s assessments. With a more prominent international profile, treaty bodies’ reports could exert continuous pressure on states to improve electoral practices even when not in the election-day spotlight.
Establishing the causal impact of observers’ reports and recommendations on behavioral changes among political actors is extremely difficult if not impossible. For this reason, most observer groups do not claim that their intended outcome is altering the way stakeholders act. Instead, they concentrate on shaping perceptions of national and international stakeholders. Perceptions are easier to gauge through public and private statements and opinion polls. Candidates are unlikely to report that they decided not to engage in fraud or intimidation, for example, because of the presence of observers or the embarrassment caused by their internationally syndicated statements. Nevertheless, international observation reports do sometimes have a direct influence on other states’ policies toward countries where elections were observed. In Madagascar in 2013, for example, a positive assessment of presidential elections was an explicit condition for the country’s re-entry into the international system after four years of isolation. Following a 2009 coup, sanctions had devastated the country’s economy, and the African Union suspended its membership. Reports by EISA, the EU, The Carter Center, and the AU confirmed a peaceful and largely democratic transfer of power in December 2013, resulting in the lifting of the sanctions and reinstatement of AU membership.
Determining the degree to which observation in general improves election quality is difficult also because, as Kelley states, observers’ decision to monitor a given election is not random. It depends both on their evaluation of whether a country has high potential for progress and on authorities’ willingness to host observers. In other words, “If the anticipated quality of an election influences whether monitors are present, then monitors may not influence quality at all, but merely respond to it. That is, monitors may simply go to elections that are more likely to improve.” [122]
Election observers want to bring about positive democratic change over the long term, not only deter misconduct for a single election cycle. Many observer recommendations require time to take effect. Legal changes can be slow in coming, but transforming the prevailing political culture, especially in countries with long traditions of repressive authoritarian rule, is even more gradual and may depend on a generational shift. Observers frequently monitor elections in a given country multiple times, giving them the opportunity to evaluate progress. Ultimately, the extent of observers’ influence does not follow a simple formula: sometimes countries implement rapid, major improvements; sometimes they advance in limited areas and not in others; sometimes very gradual change takes effect; and sometimes promising developments are followed by a backslide to authoritarianism or disorder. Yet patterns exist in terms of areas of the electoral process most likely to improve following observation, notably electoral laws, voter lists, and the efficacy and timeliness of polling procedures (related to training). [123] Closer and longer-term tracking of changes in electoral and political conditions in countries previously observed can help shape observers’ approach to recommendations and follow-up, as well as their understanding of their own strengths and weaknesses. The ACE Database of Recommendations, which compiles recommendations issued by all major organizations, as well as groups’ collective effort to articulate to whom their recommendations are targeted, have made this process more systematic and better suited to a long-range approach to democratic reform.
[113] Kelley, Monitoring Democracy, 60.
[114] Ibid., 65.
[115] Ibid., 66.
[116] Ibid., 70.
[117] Ibid., 71.
[118] Ibid., 72.
[119] OSCE/ODIHR, Handbook for the Observation of New Voting Technologies (Warsaw: OSCE/ODIHR, 2013), 5.
[120] Carter Center, Electronic Voting, 1.
[121] Council of Europe, Certification of E-voting Systems (Strasbourg: Council of Europe Directorate General of Democracy and Political Affairs, 2011), 5.
[122] Kelley, Monitoring Democracy, 112.
[123] Ibid., 137-9.