Even before the endorsement of the Declaration of Principles in 2005, professional observer groups and critics of observation alike articulated a critical need to define rigorous assessment criteria. While “free and fair” remained the most frequently used benchmark, experts were uncomfortable with its inability to express nuance. As Elklit and Svensson testified as early as 1997, “The phrase ‘free and fair’ cannot denote compliance with a fixed, universal standard of electoral competition: No such standard exists, and the complexity of the electoral process makes the notion of any simple formula unrealistic.” [53] With this in mind, the Declaration of Principles conspicuously avoided use of the term. At the same time, while it urged endorsers to harmonize their methodologies, it did not provide more detail on what standards election observers should use.
The appeal of “free and fair” was that it was a blanket assessment of an election that was easy for the public to digest. By distancing themselves from “free and fair,” observer groups faced the challenge of finding alternative formulations to express their overall evaluation. The move toward long-term observation, and its pairing in most cases with short-term observation, increased the amount and diversified the types of data observers collected. Carroll and Davis-Roberts explain the fundamental question with which observer groups continue to grapple:
The most difficult challenge is to evaluate the extent and significance of observed problems during various stages of the election, and to assess the degree to which they fundamentally undermine the integrity of the entire election and the final results. Such analysis raises the core question of how much weight or value to give to various parts of the electoral process and the relevant obligations. [54]
Observers recognize that the quality of an election can be compromised during the pre-election campaign period or during post-election dispute resolution, just as on election day itself. To this end, the discrete parts of the electoral process have been identified to ensure that reporting addresses all aspects sufficiently. Acknowledging that any part of the process can be compromised, the question remains: are all equally important? While the models used differ slightly, observer groups avoid imposing a hierarchy, recognizing that context will determine the challenges of each election.
Logistical and financial limitations often force groups to make difficult choices about what to observe when they cannot observe everything. To best allocate resources, they must evaluate where vulnerabilities to manipulation or fraud are greatest and the relative degree to which different types of potential violations would undermine the integrity of the process. In some political or cultural contexts, observers may know in advance to devote resources to past problem areas, but often these are hard to predict.
Dissatisfaction with “free and fair” and the need for a more systematic framework suited to the rigorous observation of all aspects of the electoral process led to the emergence of a new methodological paradigm. State obligations under public international law had underpinned the values of observers since the start, but were not articulated as a comprehensive assessment framework. The establishment of the Declaration of Principles community in 2005 helped observation organizations further coalesce around an approach palatable to states themselves and which enables observers to make specific, compelling arguments regarding the fulfillment of universal human rights in the electoral process.
Public international law includes treaties, judicial decisions, political commitments, and other sources of best practice. U.N. treaties are the starting point for obligations-based frameworks because they are instruments that countries around the world have voluntarily agreed through ratification to uphold. Examples include the ICCPR and the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW). Where applicable, regional treaties such as the African Union’s African Charter on Human and Peoples’ Rights and the Council of Europe’s Convention for the Protection of Human Rights and Fundamental Freedoms are also important and may grant a greater degree of contextual specificity to obligations.
Treaties address rights and freedoms broadly but often lack detail that can help interpret how they should be applied in practice. In addition, new standards evolve that are not immediately codified in binding treaties. Interpretive documents, which include judicial decisions by intergovernmental courts (e.g., International Court of Justice, Inter-American Court of Human Rights) and General Comments issued by treaty-monitoring bodies (e.g., U.N. Human Rights Committee), explain the intent of treaty principles. Political commitments are non-binding instruments that provide evidence of emerging norms. Examples include OAS’ Inter-American Democratic Charter and the AU’s Declaration on the Principles Governing Democratic Elections in Africa. Finally, handbooks and other works of established experts (e.g., EU Handbook for European Union Election Observation or Norwegian Helsinki Committee’s Manual on Human Rights Monitoring) provide evidence of best state practices and are often cited in international court decisions.
ODIHR, for example, which observes in OSCE member states, cites as its primary benchmark the 1990 Copenhagen Document, a political commitment that outlines standards for democratic elections in the OSCE and enshrines the role of observers in helping to uphold those standards. ODIHR’s framework, outlined in its Election Observation Handbook, also incorporates universal human rights instruments, including the UDHR, ICCPR, CEDAW, and the Convention on the Elimination of All Forms of Racial Discrimination (CERD). Human rights instruments of other regional bodies to which OSCE member states have acceded are also applied, such as the rulings of the European Court of Human Rights. The Handbook also acknowledges the value of non-binding documents that may be used for “providing guidance to participating States” and which “provide examples of how international or regional obligations might be carried out.” [55]
Observers are often asked how national law fits into an obligations-based assessment framework. Analysis of a country’s legal framework for elections and the conditions it provides for democratic governance is a crucial part of any mission. On one hand, observers do assess the extent to which laws are implemented during the electoral process. On the other, while ratification of a treaty commits a state to take the necessary steps to harmonize national law with its principles, this does not always occur. Some countries include in their constitutional framework a provision for automatically adopting international treaties as national law upon ratification; others make the legislature responsible for passing the appropriate legislation. In evaluating national laws, then, observers should highlight places where national law can be improved to bring it into alignment with a country’s international obligations.
While organizations now draw their criteria from essentially the same body of sources, some differences exist in how they translate them into a comprehensive assessment framework for observers to use. In each case, the goal is to define the essential characteristics of a democratic electoral process based on international obligations and to develop criteria for determining whether they are fulfilled.
The Carter Center and International IDEA, for example, have agreed upon 21 obligations that form the basis of their methodology. Every obligation is drawn from an international treaty (often the ICCPR) and supported by other sources. “Genuine Elections that Reflect the Free Expression of the Will of the People” is the framework’s overarching obligation, as it defines the essence of democratic elections. This language comes directly from Article 21 of the UDHR and was reinforced in the ICCPR. All fundamental rights must be broadly respected for the overarching obligation of Genuine Elections to be met. The Carter Center organizes obligations into three types: “foundational obligations,” related to the state’s responsibility to protect freedoms and rule of law; “process-focused obligations,” which deal specifically with the electoral process; and “individual rights and freedoms.” [56]
In 2000, representatives of the Electoral Institute for Sustainable Democracy in Africa (EISA) and the Southern African Development Community (SADC) Electoral Commissions Forum began drafting a document that outlined standards-based criteria for assessing each part of the electoral process, with particular reference to African regional instruments. Principles for Election Management, Monitoring, and Observation in the SADC Region (PEMMO) (2003) identifies between five and 12 key principles for each sub-part of the electoral process that taken together make up “free and fair, credible, and legitimate elections in a climate of peace and security.” [57] Criteria for “Constitutional and Legal Framework,” for example, require that it, “provide for the establishment of an independent and impartial electoral management body,” “provide for the regular scheduling of elections,” etc. [58] The former corresponds in the Carter Center framework with “Freedom from Discrimination and Equality Before the Law” and the latter with “Periodic Elections.”
OAS starts with four key descriptors (“attributes”) of a democratic process. According to A Manual for OAS Electoral Observation Missions, they consist of: (1) inclusive elections; (2) clean elections; (3) competitive elections; and (4) elective public offices. [59]
Each attribute is broken down into two “components.” “Competitive Elections,” for instance, comprises (1) “Right to run for office,” and (2) “Basic guarantees for an electoral campaign.” Some break down further into “subcomponents”: “Basic guarantees for an electoral campaign” includes freedoms of association, assembly, expression, and movement; access to information; equal playing field; and security. Each component or subcomponent is then translated into a question (“issue at stake”) which, if answered in the affirmative, verifies the presence of democratic attributes. The question “Are there unreasonable hurdles to become a candidate?,” for instance, corresponds with “Right to run for office,” which is one of two components of “Competitive Elections.” [60]
ODIHR’s Election Observation Handbook excerpts clauses of the Copenhagen Document that explicitly relate to elections. Because ODIHR’s mandate largely derives from Copenhagen, these principles can stand alone as obligations to a large extent. ODIHR does, however, designate eight criteria delineated in the Copenhagen Document and other source documents: (1) Periodic elections; (2) Genuine elections; (3) Free elections; (4) Fair elections; (5) Universal suffrage; (6) Equal suffrage; (7) Voting by secret ballot; and (8) Honest counting and reporting of results. [61] “Free and fair” appear as only two of eight characteristics, not as all-inclusive descriptors. Each category comprises specific elements: “Free elections” includes, e.g., freedom of assembly, association, expression, and movement.
The assessment frameworks used by The Carter Center, EISA, OAS, ODIHR, and other organizations differ primarily in their structure, not in their criteria. Some organizations use a larger number of obligations linked to specific treaty stipulations, while others synthesize the essence of the source documents and assign qualitative labels. It is easy to identify the common principles, however. For example, the elements of the OAS’s “Basic guarantees for an electoral campaign,” a component of the “Competitive Elections” attribute, roughly equate to the criteria assessed under ODIHR’s “Free Elections” category, as well as to individual Carter Center obligations (“Freedom of Association,” “Freedom of Opinion and Expression,” “Freedom of Assembly,” etc.).
Just as it is not possible to establish a hierarchy of importance of different parts of the process, “the relative significance of obligations,” Carroll and Davis-Roberts note, “is inextricably tied to the local context.” [62] In some cases an obligation may be violated without critically undermining the integrity of the process. While the principle of secrecy of the vote, for instance, is widely agreed upon as a democratic standard that protects voters from intimidation, voters in some societies report a lack of concern about more public forms of voting. [63] While observers should note the absence of ballot secrecy, they need not underscore it as a detriment to the free expression of voters’ will. When gauging the relative significance of violated obligations in any context, observers also must consider whether the margin of victory is narrow enough that the failure to uphold a given obligation could have spoiled the process.
Existing obligations-based frameworks cannot tackle every issue related to the electoral process. Some standards are still evolving or remain undefined. OAS notes abstentionism, compulsory voting, and certain aspects of boundary delimitation and allocation of representatives as examples. [64] As the body of international law grows and observers and human rights groups work more closely together, these issues may be addressed over time.
An obligations-based assessment framework is most useful when groups can find ways to collect data that correlate reliably with those obligations or standards. One basic tool of observers in this regard is the election day checklist. Short-term observers (STOs) use these forms to record their findings at each polling station where they observe. Since Garber included a sample checklist in his 1984 handbook, forms have become more sophisticated but have retained common elements. Observers still record when they arrive and depart, whether ballot boxes are properly sealed, whether voters are intimidated, and whether the setup of the polling station preserves secrecy of the vote, for example. But most groups now use different checklists for different parts of election day, including opening procedures, polling, closing and counting, and tabulation of votes. Groups also customize questions for particular country contexts. A question that asks about unauthorized persons present in the polling station, for instance, will reflect national laws on the subject and will exclude “security personnel” if no restrictions on security personnel exist. The range of topics covered also has expanded. Observers may collect information on numbers of female polling officials and other indicators of gender inequality, as well as on accommodations for speakers of minority languages and voters with disabilities.
The most important issue guiding checklist design, however, is how to collect accurate information that can be compared meaningfully from a large number of observers reporting from different locations. Forms are now engineered to elicit the most specific and objective information possible, allowing less space for individual observer bias. What was once asked as “Are voters identified as prescribed by law?” [65] might expand to a series of questions that guide observers through discrete procedural steps (e.g., “Did the PEC [precinct election commission staff] check the voters’ IDs?” “Did the PEC sign and stamp the ballot?” “Did the voters sign the voter list?”). [66] Detailed checklist questions help focus observers on specific aspects of what can be a chaotic process, but they also enable staff analyzing reports to identify where procedures are breaking down or where laws are consistently violated. Most checklist questions today also give observers a range of answers to specify the frequency of observed irregularities.
A shift away from open-ended questions to yes/no and multiple-choice questions is another critical methodological development. A question such as “How were unused ballots disposed of?” is better worded as, for example, “Were the voter list, unused ballots, and spoiled ballots packed in separate envelopes and sealed?” Open-ended questions complicate data analysis and synthesis, making it difficult to reach meaningful conclusions about the fulfillment of obligations. They also give observers leeway to evaluate the process subjectively and potentially miss crucial data points. However, most forms do instruct observers to elaborate on irregularities they have observed so that reports of misconduct can be traced and reviewed later. Expert staff then review the data as a whole and identify patterns.
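The analytical advantage of closed questions can be sketched in a few lines of code. This is a hypothetical illustration only: the question IDs, station codes, and answers below are invented and do not reflect any organization's actual checklist schema.

```python
from collections import Counter

# Hypothetical checklist submissions; question IDs and answers are
# invented for illustration, not drawn from any real mission's forms.
submissions = [
    {"station": "PS-001", "ids_checked": "yes", "ballots_stamped": "yes", "list_signed": "yes"},
    {"station": "PS-002", "ids_checked": "no",  "ballots_stamped": "yes", "list_signed": "yes"},
    {"station": "PS-003", "ids_checked": "no",  "ballots_stamped": "no",  "list_signed": "yes"},
]

def irregularity_rates(submissions):
    """Share of stations answering 'no' to each procedural question."""
    questions = [q for q in submissions[0] if q != "station"]
    rates = {}
    for q in questions:
        counts = Counter(s[q] for s in submissions)
        rates[q] = counts["no"] / len(submissions)
    return rates

# Two of three stations skipped the ID check -- a pattern that tallies
# over yes/no answers surface immediately, but that free-text responses
# would force staff to code by hand first.
print(irregularity_rates(submissions))
```

Free-text answers would have to be read and coded by analysts before any such tally were possible; closed answers aggregate directly.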
While observer groups continue to tweak their checklists to extract ever more precise data, significant challenges remain. The most important one is the development of an overall evaluation question that provides clear and reliable information regarding the general quality of the process at each polling station or tabulation center. ODIHR, for instance, asks: “The overall conduct of the voting of this polling station was: (1) Very good; (2) Good; (3) Bad; or (4) Very bad” [67] and The Carter Center currently asks: “What was the overall assessment of the election environment and process at this station? (1) Very good; (2) Reasonable; (3) Poor; (4) Not credible.” Determining the best way to word this question in order to receive consistent and reliable results is a microcosm of the methodological challenge discussed at the beginning of this section: How do we weigh different parts of the process in order to arrive at an overall assessment? Some scales may elicit more or less reliable information than others. But how bad is “very bad”? Or, more challenging, what makes a polling station good enough? Observers themselves may view violations they have witnessed more or less seriously depending on comparative personal experience or preconceived dispositions toward the process.
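The weighting dilemma described above can be made concrete with a toy calculation. Both the aspect scores and the weights below are invented for illustration; no observer group publishes such a numeric formula. The point is simply that identical findings yield different overall assessments depending on how the parts of the process are weighted.

```python
# Toy per-aspect scores for one hypothetical polling station on a 0-1
# scale (1 = fully compliant). Scores and weights are invented.
scores = {"secrecy": 0.4, "procedures": 0.9, "intimidation": 1.0}

def overall(scores, weights):
    """Weighted average of aspect scores."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

# Same observations, two weighting schemes:
equal = overall(scores, {"secrecy": 1, "procedures": 1, "intimidation": 1})
secrecy_heavy = overall(scores, {"secrecy": 3, "procedures": 1, "intimidation": 1})

# Under equal weights the station looks "good"; weight ballot secrecy
# heavily and the same station drops toward "poor."
print(round(equal, 2), round(secrecy_heavy, 2))
```

This is why context matters: in a society where public voting carries little intimidation risk, a low secrecy weight may be defensible; elsewhere it would be indefensible.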
The assessment frameworks of almost all major organizations include aspects of the electoral process that require long-term observation. Long-term observers (LTOs) contribute qualitative analysis to a mission, helping establish the electoral context. LTO reports, in contrast to short-term observer (STO) checklists, involve long-form answers to open-ended questions. Therefore, LTOs must have a stronger grasp of standards and obligations than their short-term counterparts. While organizations are still developing ever more effective ways to correlate LTO reports with specific assessment criteria, a standard range of tools already exists. Most missions require LTOs to compile weekly reports synthesizing findings in their areas of responsibility, providing information on meetings with political stakeholders, campaign events, and any pre- or post-election procedures observed (e.g., voter registration, voter education, poll-worker training, dispute adjudication), as well as identification of potential problem areas and gathering of documentation. Obligations and assessment criteria should shape the language used to guide LTOs in their work.
In addition to the comprehensive weekly report, many groups use variations on three types of shorter LTO reports as needed: campaign rally reports, flash or incident reports, and case files. A campaign rally report collects data on attendees, any instances of intimidation or violence, respect for freedom of assembly and expression, and the tenor of rhetoric. Flash or incident reports capture urgent developments between weekly reports, often including violence or sudden changes in election rules. Case files track complaints and legal violations related to the electoral process. As ODIHR’s LTO manual notes, core team staff may compile case files into a database to track systematic problems. [68]
OAS missions use another type of form, the Document of Indicators on the Electoral Process, which is the responsibility of a designated Indicators Specialist and focuses directly on aligning the electoral environment with obligations in the assessment framework. This document concentrates on the compliance of the country’s own legal framework with its international obligations but incorporates reporting from other core team members and deployed observers. [69]
Comprehensive EOMs generate hundreds or even thousands of checklists on election day. Collecting and analyzing checklists from around the country at headquarters after the polls close in order to reach meaningful and statistically significant conclusions, all in time to release a preliminary assessment and hold a press conference within a day or two of voting, is incredibly demanding. This pressure has inspired the development of faster observation technologies. Several observer groups, including The Carter Center, National Democratic Institute (NDI), Democracy International (DI), and others now use mobile technologies in the field for missions. Each has three basic components. First, staff design checklists online, similar to those on paper. Then, observers complete and submit their checklists through mobile platforms adapted for smart phones or tablets. Finally, software aggregates and maps results, enabling headquarters staff to follow observer reports in real time.
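The third component, real-time aggregation, can be sketched as a running tally updated as each form arrives. The region names and the single "assessment" field below are invented for illustration and are not any platform's actual data model.

```python
from collections import Counter, defaultdict

# Hypothetical submissions arriving from observers' tablets; regions
# and the "assessment" field are invented for illustration.
incoming = [
    {"region": "North", "assessment": "good"},
    {"region": "North", "assessment": "bad"},
    {"region": "South", "assessment": "good"},
]

def ingest(live, submission):
    """Update the running per-region tally as each form arrives."""
    live[submission["region"]][submission["assessment"]] += 1

live = defaultdict(Counter)
for form in incoming:
    ingest(live, form)

# A mapping layer could then color each region by the share of
# negative assessments reported so far.
for region, tally in sorted(live.items()):
    bad_share = tally["bad"] / sum(tally.values())
    print(region, round(bad_share, 2))
```

Because each submission updates the tally incrementally, headquarters staff can watch patterns emerge during the day rather than waiting for paper forms to be collected and keyed in.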
Observation technologies allow staff to identify problems as they occur and focus their attention on drafting statements instead of coding checklist responses. They also provide quick yet powerful tools for presenting the data collected in tables, graphs, charts, or maps. In addition, some programs can prompt observers to correct errors in their submissions, and broadcast messages or edited checklists from headquarters to teams already deployed in the field.
The Carter Center piloted its mobile observation technology, ELMO (Election Monitoring), during the 2011 elections in the Cherokee Nation and Liberia. [70] The Center then developed expanded functions for full operational use by STOs in 2012 and 2013 in Egypt, Libya, Kenya, and Nepal. ELMO, like the technologies used by several organizations, builds its user (observer) end on Open Data Kit, a customizable, open-source form collector for tablets.
Using technology for observation can be challenging where the technological infrastructure is limited. Missions should be well aware of conditions before deployment and have contingency plans in place. ELMO mitigates the risk of losing connectivity by allowing observers to save their forms to their tablets or to USBs until access is available. Staff also equip observers with emergency paper checklists.
Another alternative where connectivity is limited is the submission of forms by SMS. SMS reporting by 750 citizen observers in Indonesia in 2005 as a replacement for unreliable land line phones, then by international observers in an NDI pilot project during the 2006 Palestinian Legislative Council elections, sparked a sustained NDI initiative aimed at supporting citizen observers’ more widespread use of SMS reporting. [71] As Ian Schuler notes, SMS technology provided the first opportunity for observer data to be delivered directly from the observer into a database, without the cost or labor of an intermediary in a call center to receive and record the information. [72] Though SMS preceded tablet-based reporting, which allows more information to be viewed and communicated quickly, it remains the most effective option in many less developed countries.
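Structured SMS reporting works by encoding checklist answers in a compact, fixed message format that software can decode into database fields. The format below is invented for illustration; actual projects define their own station codes and answer encodings.

```python
# Hypothetical SMS format: "<station> <answers>", where each answer
# digit encodes one checklist question (1 = yes, 2 = no). Both the
# question list and the encoding are invented for illustration.
QUESTIONS = ["ids_checked", "ballots_stamped", "list_signed"]

def parse_sms(text):
    """Decode one structured SMS into checklist fields."""
    station, answers = text.strip().split()
    decoded = {"station": station}
    for question, digit in zip(QUESTIONS, answers):
        decoded[question] = "yes" if digit == "1" else "no"
    return decoded

print(parse_sms("PS-014 121"))
```

This is what allows an observer's report to land directly in a database: the message is machine-readable on arrival, with no call-center intermediary needed to interpret it.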
The next phase of technological developments in observation will bring greater integration of platforms with methodological resources, allowing observers, for example, to link relevant obligations to each checklist question. User interfaces will become available in a variety of languages and with accommodations for users with disabilities, making them effective tools for more international and citizen observers in more places. Observer groups also must determine how best to harness form-based technologies for long-term reporting.
The codification of observation methodology in the form of online and published materials is essential not only for building consensus, but also for training observers and ensuring that their work is transparent and understood by stakeholders. ODIHR, EU, and OAS have published comprehensive manuals for observers that combine methodological and practical instructions for carrying out a professional mission. Documents like ODIHR’s Election Observation Handbook (6th ed., 2010), the EU’s Handbook for European Union Election Observation (2nd ed., 2008), and OAS’ Manual for OAS Electoral Observation Missions (2009) help ensure consistency across missions and build a common body of knowledge among observers and staff. They also can serve as guides for newer organizations, both international and national, interested in developing observation programs.
Since EISA and SADC published PEMMO (discussed above) in 2003, several other organizations have expanded the effort to provide observers with useful guides on the use of international obligations. The EU, for instance, has published the Compendium of International Standards for Elections, which offers observers step-by-step instructions for applying public international law in their analysis and offers a printed matrix of countries’ ratification statuses. In 2010, The Carter Center launched an online Database of Obligations (re-designed in fall 2014 as the Election Obligations and Standards Database, or EOS). EOS consolidates text from nearly 200 sources of public international law, providing summaries of the democratic standards established in the source documents and linking each to relevant parts of the electoral process. Observers and core staff, assistance providers, researchers, and citizens can search by obligation, election part, country, authoring organization, or keyword, extract relevant quotes, and know the source from which those quotes are derived. Observers can then make immediate use of the texts in their reports, ensuring that international law directly supports their assessments. The database’s companion handbook, Election Obligations and Standards: A Carter Center Assessment Manual, breaks down the 21 obligations into detailed, topical assessment criteria for all aspects of the electoral process.
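The kind of faceted lookup such a database supports can be sketched with mock records. The fields and entries below are loosely modeled on an obligations database but are invented; they are not EOS's actual schema or contents.

```python
# Mock source records; fields and entries are invented for
# illustration, not drawn from the actual EOS database.
SOURCES = [
    {"obligation": "Secret ballot", "part": "Voting operations",
     "source": "ICCPR, Art. 25",
     "text": "...guaranteeing the free expression of the will of the electors, held by secret ballot..."},
    {"obligation": "Freedom of assembly", "part": "Campaigning",
     "source": "ICCPR, Art. 21",
     "text": "The right of peaceful assembly shall be recognized."},
]

def search(records, keyword):
    """Return every record mentioning the keyword in any field."""
    kw = keyword.lower()
    return [r for r in records
            if any(kw in str(value).lower() for value in r.values())]

for hit in search(SOURCES, "assembly"):
    print(hit["obligation"], "--", hit["source"])
```

Because each quote stays linked to its source document and the relevant part of the electoral process, an observer can drop a retrieved passage straight into a report with its legal citation attached.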
A significant body also exists of thematic manuals that detail methodologies for observing specific components of the electoral process. In 2001, ODIHR and International IDEA drafted the first handbook on electoral legal frameworks, Guidelines for Reviewing a Legal Framework for Elections. In 2013 ODIHR released an updated edition of the Guidelines to reflect new case law and the emergence of electronic voting systems. The goals of the manual are to enable observers to identify gaps in legal protections and to guide authorities to improve electoral legislation. [73] Separately, IDEA released International Obligations for Elections: Guidelines for Legal Frameworks in 2014. Observer groups also have recognized the importance of monitoring areas such as media, campaign finance, gender equality, and voter registration. Methodology for Media Observation during Elections: A Manual for OAS Electoral Observation Missions (2011) articulates the objective of media observation as “[analyzing] the existence or not of conditions of equity to compete in an election, observing, for example, access to media outlets by political actors…” [74] This handbook, as well as ODIHR’s Handbook on Media Monitoring for Election Observation Missions (2012), discusses how to collect and process media coverage, looking at balance of coverage as well as voter education content. ODIHR likewise has published manuals on monitoring women’s political participation and voter registration, and OAS on campaign finance.
The advent of electronic voting and related technologies in the last decade has presented the greatest challenge to observation methodology because standard methods are often inapplicable. Observers working in countries using technology need to ask new types of questions. Some may be obvious, such as: Is there a contingency plan in case of equipment failure? But observers also need a sufficient technical understanding of the technologies used to assess independently whether systems are secure from hackers, for example, or whether ballot verification measures (such as a voter-verified paper trail, or VVPAT) are functioning properly. The Carter Center first published its Baseline Survey for Observing Electronic Voting in 2007, after testing it in Venezuela’s 2006 presidential election, in an attempt to equip observers with a comprehensive set of tailored questions for stakeholders. [75] The Center’s Handbook on Observing Electronic Voting (2nd ed., 2012), as well as ODIHR’s Handbook for the Observation of New Voting Technologies (2013) and OAS’ Observing the Use of Electoral Technologies (2010) outline the specialized expertise necessary for a mission involving observation of e-voting or even e-registration and extrapolate established obligations to technological applications. Most recently, in 2013, NDI and IFES released Implementing and Overseeing Electronic Voting and Counting Technologies, a guide geared both to electoral management bodies implementing technologies and to observers, political parties, and civil society seeking to understand and monitor their use. Drawing on real case studies, the manual addresses legal issues, timelines for implementation, sustainability, trust and integrity, and data security, as well as the extent to which technologies reflect international standards of transparency, accountability, and ballot secrecy. [76]
As the volume of printed resources is now immense, some groups have experimented with moving parts of their observer training online. ODIHR, for instance, has developed a free, four-module, web-based course for STOs that introduces them to OSCE commitments and other relevant international standards; their mandate, role within the mission, and daily responsibilities as observers; and the Code of Conduct. [77] The EU invites STOs to complete a six-module course on “Safety and Security in the Field,” and IFES produced a 30-minute video tutorial on Ukrainian election procedures specific to international and citizen observers deploying for the country’s May 2014 presidential vote. [78]
The Declaration of Principles emphasizes long-term observation with enough depth and breadth to meet certain criteria: it must be “process oriented”; report periodically, accurately, and impartially; publicly announce its mandate; and employ observers without conflicts of interest. [79] A mission must be “of sufficient size to determine independently and impartially the character of election processes… and must be of sufficient duration to determine the character of all of the critical elements of the election process in the pre-election, election-day, and post-election period.” [80] At the same time, the Declaration of Principles makes clear that observer groups might decide to deploy limited missions, but in this case they must identify appropriate foci and neither overstate their parameters nor draw conclusions about parts of the process they did not observe. As long as groups fulfill these conditions, the Declaration does not require that a specific mission model be used.
Financial resources, organizational size and support capacity, and group profile are the most important factors that drive organizations’ mission structure and methods. Each organization has a comprehensive election observation mission model when conditions are conducive to a robust observer presence throughout the country and for an extended period of time. When considering deployment of a mission, observer groups generally send a small assessment mission of headquarters staff and/or country experts a few months to a year before elections to investigate the pre-electoral environment. They determine whether a mission would be a smart investment of resources and gauge the host government’s willingness to welcome observers. Intergovernmental organizations usually have existing diplomatic ties in the host country. In the EU’s case, European Commission officials based in the host country are able to lay the groundwork for a mission before observers arrive, [81] and ODIHR has a standing invitation to observe in all member states.
A standard comprehensive international observation mission comprises a core team of managers and topic experts based in the host country’s capital, long-term observers, and short-term observers. The size of the delegation, division of responsibilities, and average duration of stay vary. ODIHR deploys some of the largest missions, with a core team of 10-15 international experts. The team, which begins operations in the host country six to eight weeks before election day, includes a head of mission and deputy, reporting officer, election analyst, political analyst, legal analyst, media analyst, statistical analyst, LTO coordinator, and occasionally a specialist in women’s participation, national minorities, e-voting, or campaign finance. [82] Particular to ODIHR as an arm of the OSCE is the parliamentary liaison officer, who coordinates with parliamentarians from the OSCE Parliamentary Assembly, Parliamentary Assembly of the Council of Europe, or European Parliament who are also observing. [83] EU missions appoint a chief observer who is a member of the European Parliament, though the mission is independent from the EU. [84] Otherwise, EU teams have a similar composition to those of ODIHR, and sometimes include a human rights expert to cover women, minorities, and disabilities together. [85] All organizations require operational support in the areas of security, logistics and procurement, and finance, usually provided by both international and local staff.
The Carter Center, as a smaller, nongovernmental organization, deploys core teams with a similar composition but fewer members: at minimum, a field office director, observer coordinator, legal analyst, and security manager. Experts may play multiple roles, e.g., an LTO coordinator with a statistical background who is able to analyze as well as collect data from observers. The Carter Center, NDI, EISA, and OAS all recruit former heads of state or similarly eminent individuals from the host country’s region to lead their missions on election day. This brings visibility to observers’ findings and demonstrates keen international interest in electoral conduct.
The OAS model largely employs its own specialist staff from the OAS General Secretariat Department of Electoral Cooperation and Observation (DECO) as core team members instead of consultants. Aside from the non-staff chief of mission, core teams include a deputy chief, general coordinator, press specialist, electoral organization specialist (the EMB liaison), legal specialist, electoral technology specialist, electoral analyst, indicators specialist, and operational support. [86]
International observation missions nearly always employ a handful of staff who are citizens of the host country. While national staff cannot be accredited as observers, they often play an integral role in advising international experts on local context and laws, interpreting, and monitoring media.
Most organizations deploy long-term observers (LTOs) to the host country shortly after the core team establishes itself, sometimes as soon as one week. There is no set number of LTOs that must be present to constitute a comprehensive mission. However, the combined number of short- and long-term observers needed for sufficient coverage ranges from 10-50 (EISA) to “16 or more” (Carter Center), to more than 100 (ODIHR and EU missions). The precise number is generally contingent on factors such as country size, number of polling stations, political structure, and electoral system. LTOs must commit to as long as several months in country and deliver informed, analytical weekly reports based on daily meetings with candidates, political parties, regional electoral officials, civil society actors, and security personnel during this period. They also often prepare a deployment plan for STOs assigned to their areas of responsibility.
Observers may be recruited through open calls seeking country expertise, civil society background, or elections experience, or through rosters of trained individuals maintained by the intergovernmental organizations that use them. In ODIHR’s case, OSCE member states second observers to missions, though efforts are made to recruit and fund a diverse pool of observers from states that do not participate regularly in this process. [87] Organizations usually deploy LTOs in pairs of mixed nationality and gender to a representative sample of regions nationwide. Leading groups define the duration of long-term observation as averaging between six weeks and six months (though it may last longer if elections are delayed or results are disputed), with some organizations such as The Carter Center preferring a minimum of three months for long-term observation.
During a comprehensive mission, short-term observers complement long-term observers’ analysis by collecting data from a large number of polling stations on election day. As with LTOs, efforts are made to recruit STOs with diverse nationalities and with a gender balance.
In many cases, short-term observation is also an opportunity to bring citizen observers from neighboring countries or those with upcoming elections to observe with an international mission. EISA, for example, frequently recruits representatives from African civil society organizations and members of EMBs to serve as STOs. [88] Unlike LTOs, STOs are not present long enough to establish relationships with stakeholders. Instead, they must be accurate, efficient, and able to evaluate what they observe in and around polling stations using mission checklists. They remain in the host country for one to two weeks immediately surrounding an election. After training with the core team, they often have one day to familiarize themselves with their areas of responsibility (AORs) and meet with nearby LTOs. STOs remain a few days for counting and tabulation but do not stay indefinitely if the announcement of final results is protracted.
Long-term observers and core team members may stay on for weeks after voting ends. After releasing a preliminary statement within several days of voting, LTOs and staff work on a longer and more detailed comprehensive final report that incorporates analysis of pre-election conditions, voting procedures, and post-election developments, including electoral dispute resolution, post-election violence, and political dialogue. Most organizations issue these reports within two to three months of the elections. Many also send post-election delegations to convene stakeholders for roundtable discussions of the mission’s recommendations. [89]
Comprehensive missions demand extensive financial and human resources to support a robust long- and short-term presence. Limited missions, in contrast, adapt a rigorous obligations-based methodology to a smaller scale, often with a more modest number of observers who have a clearly delineated regional or topical mandate. For example, in some instances, NDI and others observe pre- and post-election developments, eschewing direct observation of polling. ODIHR’s Limited Election Observation Mission (LEOM) model has the same duration and structure as a comprehensive mission without STOs. An LEOM might be deployed when “serious and widespread problems on election day at the polling-station level are unlikely, but… observation of the entire long-term process throughout the country might still produce useful recommendations” or, conversely, when major flaws are expected but key political forces express interest in recommendations to move their country forward. [90] ODIHR’s other limited model is the Election Assessment Mission (EAM), which focuses on a specific issue such as minority or women’s rights, campaign finance, technology, electoral dispute resolution, etc. In this case, the organization does not deploy observers outside the capital, but core team members (of whom there may be as many as 12) leave headquarters in pairs to collect information from the provinces. [91] The duration can be as short as two weeks, and the mission issues only a final report. [92] Again, the impetus to deploy an EAM may be positive or negative: either confidence in political pluralism is already strong and does not merit the expense of a large delegation, or a genuine choice among candidates is clearly absent but ODIHR wishes to maintain open dialogue with major stakeholders. [93]
Organizations sometimes employ other variations of a small, “limited” mission. For example, The Carter Center’s Election Experts Mission and ODIHR’s Election Expert Team (EET) rely on the analysis produced by a small team of experts in the field. These missions may last no more than a couple of weeks or may be an outgrowth of an established field office monitoring a protracted transition. EISA’s model is the Technical Assessment Mission (TAM), which lasts a minimum of 10 days. A TAM consists of six or fewer analysts with specific topics of expertise. Generally these individuals are permanent EISA staff or on the group’s expert roster. [94] EISA TAMs issue a final report but no preliminary statement.
[53] Elklit and Svensson, “What Makes Elections Free and Fair?” 43.
[54] David J. Carroll and Avery Davis-Roberts, “The Carter Center and Election Observation: An Obligations-Based Approach for Assessing Elections,” Election Law Journal 12, no. 1 (2013): 93, doi: 10.1089/elj.2013.1215.
[55] OSCE/ODIHR, Election Observation Handbook, 6th ed. (Warsaw: OSCE Office for Democratic Institutions and Human Rights, 2010), 17-21, http://www.osce.org/odihr/elections/68439?download=true.
[56] Carter Center, Election Obligations and Standards: A Carter Center Assessment Manual (Atlanta: The Carter Center, 2014).
[57] SADC Electoral Commissions Forum and EISA, Principles for Election Management, Monitoring, and Observation in the SADC Region (Johannesburg: SADC ECF/EISA, 2003), 2.
[58] Ibid., 8.
[59] OAS, Methods for Election Observation: A Manual for OAS Electoral Observation Missions (Washington, DC: Organization of American States, 2007), 7.
[60] Ibid., 8.
[61] OSCE/ODIHR, Election Observation Handbook, 23-24.
[62] Carroll and Davis-Roberts, “The Carter Center and Election Observation,” 93.
[63] Ibid.
[64] OAS, Methods (2007), 9.
[65] Garber, Guidelines, 49.
[66] OSCE/ODIHR, Election Observation Handbook, 111.
[67] Ibid.
[68] OSCE/ODIHR, Handbook for Long-term Election Observers: Beyond Election Day Observation (Warsaw: OSCE/ODIHR, 2007), 20.
[69] OAS, Manual for OAS Electoral Observation Missions (Washington, DC: OAS, 2009), 15.
[70] “About,” GetELMO (The Carter Center), http://getelmo.org/about.
[71] Ian Schuler, “SMS as a Tool in Election Observation,” Innovations: Technology, Governance, Globalization 3, no. 2 (2008): 146, doi:10.1162/itgg.2008.3.2.143.
[72] Ibid., 148.
[73] OSCE/ODIHR, Guidelines for Reviewing a Legal Framework for Elections (Warsaw: OSCE/ODIHR, 2013), 2.
[74] OAS, Methodology for Media Observation during Elections: A Manual for OAS Electoral Observation Missions (Washington, D.C.: OAS, 2011), 2.
[75] Carter Center, The Carter Center Handbook on Observing Electronic Voting, 2nd ed. (Atlanta: The Carter Center, 2012), 3.
[76] Ben Goldsmith and Holly Ruthrauff, Implementing and Overseeing Electronic Voting and Counting Technologies (Washington, DC: IFES and NDI, 2013), 21.
[77] “E-learning Course for Short-Term Election Observers,” OSCE/ODIHR, accessed December 3, 2014, http://www.osce.org/odihr/92974.
[78] “Elearning,” EU/EODS, accessed August 1, 2014, http://www.eods.eu/elearning; “Video Tutorial on Election Day Procedures for May 25 Ukraine Election Observers,” YouTube video clip, 33:25, uploaded May 16, 2014, https://www.youtube.com/watch?v=G7_UBs_rG50&feature=youtu.be.
[79] U.N., Declaration of Principles, para. 6-7.
[80] Ibid., para. 19.
[81] European Commission, Handbook for European Union Election Observation, 108.
[82] OSCE/ODIHR, Election Observation Handbook, 37.
[83] Ibid., 42.
[84] European Commission, Handbook for European Union Election Observation, 2nd ed. (Brussels: European Commission, 2008), 116, http://eeas.europa.eu/eueom/pdf/handbook-eueom-en-2nd-edition_en.pdf.
[85] Ibid., 117.
[86] OAS, Manual (2009), 26.
[87] OSCE/ODIHR, Election Observation Handbook, 32.
[88] EISA, “EISA Election Observation Mission Models,” in e-mail to author, 2.
[89] “NDI’s Comprehensive Approach to International Election Observation,” in e-mail to author, 6.
[90] OSCE/ODIHR, Election Observation Handbook, 30.
[91] Ibid., 31.
[92] Ibid., 31-2.
[93] Ibid., 32.
[94] EISA, “EISA Election Observation Mission Models,” in e-mail to author, 2.