Three instrumental means of minimizing the risks of technology are system verification, testing and maintenance. Every aspect of a computer system (hardware, communications and software) should be verified and thoroughly tested before the system is used for an electoral event. After successful testing, systems will need regular maintenance to ensure they will perform effectively when they are needed.
The technology's level of importance will likely determine the degree of rigour applied to verifying, testing and maintaining it. For a system used for a crucial electoral function, such as electronic voting, a high degree of rigour will be needed.
System verification
For a highly important system such as an electronic voting system, it is appropriate to employ an independent testing authority to perform system verification tests. For less important systems, system verification could be conducted in-house.
System verification tests (otherwise known as qualification tests) could include:
- testing of hardware under conditions simulating expected real-life conditions
- testing of software to ensure that appropriate standards are followed and that the software performs its intended functions, including audits of code
- ensuring system documentation is adequate and complete
- ensuring data communications systems conform to appropriate standards and perform effectively
- verifying that systems are capable of performing under expected normal conditions and possible abnormal conditions
- ensuring appropriate security measures are in place and that they conform to appropriate standards (a simple integrity check of this kind is sketched after this list)
- ensuring that appropriate quality assurance measures are in place
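As an illustration only, parts of such a verification exercise can be automated. The sketch below (in Python, with a hypothetical install path and manifest file) checks that the software actually deployed matches the certified build by comparing file hashes against a signed-off manifest; it supplements, rather than replaces, a full audit of the code.

```python
# Illustrative sketch: check that installed software matches the certified build
# by comparing file hashes against a signed-off manifest. The manifest format,
# file names and install path are assumptions made for this example only.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large files."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_against_manifest(install_dir: Path, manifest_path: Path) -> list:
    """Compare installed files with the certified manifest and list any discrepancies."""
    manifest = json.loads(manifest_path.read_text())  # e.g. {"tally_module.py": "ab12..."}
    problems = []
    for name, expected_hash in manifest.items():
        target = install_dir / name
        if not target.exists():
            problems.append("missing file: " + name)
        elif sha256_of(target) != expected_hash:
            problems.append("hash mismatch: " + name)
    return problems

if __name__ == "__main__":
    issues = verify_against_manifest(Path("/opt/voting-system"),
                                     Path("certified_manifest.json"))
    print("integrity check passed" if not issues else "\n".join(issues))
```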
System testing
System testing is usually more detailed and thorough than system verification. It is needed to ensure that every component of a system is operating as it should, and that the system performs exactly in accordance with the specific local requirements.
For an important system such as an electronic voting system, a structured system testing program is a means to ensure that all aspects of a system are tested. Testing measures that could be followed include:
- developing a set of test criteria
- examining all non-standard code to ensure its logical correctness and to ensure that appropriate standards of design and construction are followed
- applying 'non-operating' tests to ensure that equipment can stand up to expected levels of physical handling
- applying functional tests to determine whether the test criteria have been met
- applying qualitative assessments to determine whether the test criteria have been met
- conducting tests in 'laboratory' conditions and conducting tests in a variety of 'real life' conditions
- conducting tests over an extended period of time, to ensure systems can perform consistently
- conducting 'load tests', simulating as closely as possible a variety of 'real life' conditions using or exceeding the amounts of data that could be expected in an actual situation
- verifying that 'what goes in' is 'what comes out', by entering known data and checking that the output agrees with the input (a minimal round-trip check of this kind is sketched after this list)
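To make the last point concrete, the sketch below combines the 'what goes in is what comes out' idea with a simple load test. The tally function, candidate names and ballot volumes are stand-ins chosen for the example, not features of any particular voting system.

```python
# Illustrative sketch combining a 'known input, known output' check with a load test.
# count_votes is a stand-in for the real system's tally interface; candidate names
# and ballot volumes are assumptions chosen for the example.
import random
from collections import Counter

def count_votes(ballots):
    """Placeholder tally function; a real test would drive the system under test."""
    return dict(Counter(ballots))

def make_ballots(expected_tally):
    """Build a ballot list whose correct result is known by construction."""
    ballots = [name for name, votes in expected_tally.items() for _ in range(votes)]
    random.shuffle(ballots)  # input order should not affect the result
    return ballots

def round_trip_test(expected_tally):
    ballots = make_ballots(expected_tally)
    actual = count_votes(ballots)
    assert actual == expected_tally, "output does not agree with known input: %r" % actual

if __name__ == "__main__":
    for scale in (1, 100, 2000):  # step up to and beyond expected real-life volumes
        tally = {"Candidate A": 400 * scale,
                 "Candidate B": 350 * scale,
                 "Candidate C": 250 * scale}
        round_trip_test(tally)
        print("round-trip check passed with %d ballots" % (1000 * scale))
```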
System maintenance
After systems have been verified, tested and implemented, they must be maintained so that they continue to perform at the level demonstrated during the system testing stage. Maintenance routines will vary depending on the type and complexity of the technology. Many items will come with a maintenance schedule or program recommended by the manufacturer or supplier, and maintenance may also be provided by the manufacturer or supplier as part of the purchase agreement.
Ongoing monitoring or testing of systems may need to be systematized to ensure that maintenance needs are identified and met when necessary. Where systems are in long-term use, a mechanism can be put in place to monitor feedback from users as another means of determining the need for maintenance and modification.
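One way such monitoring could be systematized is a scheduled self-check routine that logs its results, so that failures or degrading trends flag maintenance needs early. The sketch below is illustrative only; the actual checks would depend on the equipment in use.

```python
# Illustrative sketch of a scheduled self-check routine. The individual checks are
# placeholders for whatever the equipment actually exposes; the logging pattern is
# the point. Typically run on a schedule, e.g. daily via cron or a task scheduler.
import logging
import shutil

logging.basicConfig(filename="system_health.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def check_disk_space(path="/", minimum_free_gb=5.0):
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= minimum_free_gb

def run_health_checks():
    checks = {
        "disk_space": check_disk_space,
        # further checks would be added here, e.g. printer status, UPS battery, clock drift
    }
    for name, check in checks.items():
        try:
            ok = check()
        except Exception as exc:  # a probe that fails to run is itself a maintenance signal
            logging.error("%s could not be run: %s", name, exc)
            continue
        logging.info("%s: %s", name, "pass" if ok else "FAIL - maintenance needed")

if __name__ == "__main__":
    run_health_checks()
```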
Where modifications to hardware, software and/or communications are made as a result of maintenance or upgrades, it may be necessary to undertake further rounds of system verification and testing to ensure that the modified system still meets the required standards.
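One way such re-verification could be organized is to keep the results accepted at the original testing stage as a baseline and to compare a fresh run of the same tests against it after every modification. The sketch below assumes a simple baseline file and illustrative test names.

```python
# Illustrative sketch: after a modification, re-run the previously accepted tests and
# compare against the results recorded at acceptance. The test names and baseline
# file are assumptions for the example; run_suite stands in for the real test suite.
import json
from pathlib import Path

def run_suite():
    """Stand-in for re-running the accepted verification and functional tests."""
    return {
        "manifest_integrity_check": True,
        "round_trip_tally": True,
        "load_test_high_volume": True,
    }

def compare_with_baseline(results, baseline_path):
    """Report any test that passed at acceptance but fails (or is missing) now."""
    baseline = json.loads(baseline_path.read_text())  # e.g. {"round_trip_tally": true, ...}
    problems = []
    for name, passed_before in baseline.items():
        if name not in results:
            problems.append("test no longer present: " + name)
        elif passed_before and not results[name]:
            problems.append("regression: " + name)
    return problems

if __name__ == "__main__":
    problems = compare_with_baseline(run_suite(), Path("baseline_results.json"))
    print("modified system still meets the accepted baseline" if not problems
          else "further verification and testing needed:\n" + "\n".join(problems))
```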
Reference: Performance and Test Standards for Punchcard, Marksense, and Direct Recording Electronic Voting Systems, [United States] Federal Election Commission, US Government Printing Office, Washington DC, January 1990