Whether developing a complete suite of functional software, delivering new product code, or merely upgrading a single line of code, the same rules should apply. Each of the following rules should be considered when establishing a test plan:
  • Tests should be specified and recorded to ensure objectivity, consistency, reproducibility and impartiality.

  • Test environments should, as far as possible, mirror the environments the client uses.

  • Tests should be derived from user or product requirements.

  • All tests specified should indicate expected results.

  • Tests should be specified and performed by the most appropriately qualified personnel.

  • The tests should not be performed by the author of the software.

  • Test records should be annotated with pass/fail against the expected results and any deviations or side effects fully documented.

  • Where deviations or faults have been identified during testing, fault records/logs should be kept and any subsequent changes and/or corrective actions fully documented.

  • All changes to the software should be managed and controlled.

  • It should be possible to demonstrate that the software under test has been derived from a clearly defined set of source and data files.

  • Test data sets should be based on ‘typical’ (and where possible, actual) data for the software in normal use.

  • All tests should be repeated for each configuration of the product, or of the underlying platform, that is supported or used in operation.

  • The test plan should indicate what the release is delivering, the area(s) of functionality affected, a brief summary of risks identified, and the depth of testing selected to mitigate each risk.
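The record-keeping points above (tests specified with expected results, outcomes annotated pass/fail, deviations documented) can be sketched in code. The names below (`TestRecord`, `run_test`, `discount`, `TC-001`) are illustrative, not prescribed by any standard; this is a minimal sketch of the principle, not a substitute for a test-management tool.

```python
# A minimal sketch of a specified, recorded test: the expected result is
# stated before execution, the actual result is captured, and the record is
# annotated pass/fail with any deviation documented.
from dataclasses import dataclass


@dataclass
class TestRecord:
    test_id: str          # unique identifier for traceability
    description: str      # derived from a user or product requirement
    expected: object      # expected result, specified in advance
    actual: object = None
    passed: bool = False
    deviation: str = ""   # documented when the result deviates


def run_test(record, func, *args):
    """Execute the function under test and annotate the record."""
    record.actual = func(*args)
    record.passed = (record.actual == record.expected)
    if not record.passed:
        record.deviation = (
            f"expected {record.expected!r}, got {record.actual!r}"
        )
    return record


# Hypothetical function under test, exercised with 'typical' data for a
# hypothetical requirement: orders above 100 units attract a 10% discount.
def discount(quantity):
    return 0.10 if quantity > 100 else 0.0


rec = run_test(
    TestRecord("TC-001", "10% discount above 100 units", expected=0.10),
    discount, 150,
)
print(rec.test_id, "PASS" if rec.passed else "FAIL")
```

Because the expected result is part of the specified record rather than the tester's judgement at run time, the same test can be re-run by different personnel (or on different platforms) and still be objective and reproducible.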
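The final point asks the test plan to tie each identified risk to a chosen depth of testing. One way to keep that linkage explicit is a simple structured record; the field names and values below are purely illustrative assumptions, not a mandated format.

```python
# Illustrative test-plan summary: what the release delivers, the areas of
# functionality affected, the risks identified, and the depth of testing
# selected to mitigate each risk. All field names are hypothetical.
test_plan = {
    "release": "2.4.0",
    "delivers": "New export module",
    "areas_affected": ["export", "reporting"],
    "risks": [
        {
            "risk": "Data loss during export",
            "impact": "high",
            "test_depth": "full regression plus boundary tests",
        },
        {
            "risk": "Report layout regression",
            "impact": "low",
            "test_depth": "smoke tests on affected reports",
        },
    ],
}

# The plan is only complete if every identified risk names the depth of
# testing chosen to mitigate it.
assert all("test_depth" in r for r in test_plan["risks"])
```

Keeping risk and test depth in one record makes it easy to review whether the testing effort is proportionate to the risk before the release is approved.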