TESTING

a. Why test?

  • Rework or bug fixing is an expensive activity that can be avoided by timely and appropriate testing.
  • Rework impacts current work, schedules, and future delivery dates. It frustrates customers and staff alike, leading to a loss of confidence and trust all round.
  • Accurate specification of testing requirements for each job means that any additional work requested after delivery is not ‘rework’ and is therefore chargeable to the customer.

b. How to test

  • The amount and depth of testing should be in direct relation to the risk. The risk is the likelihood, and the impact on the customer, of the software not being delivered on time and to requirements, or of it adversely affecting existing functionality.
  • Each test is either ‘positive’ or ‘negative’, i.e. it confirms either that the processing has correctly changed the data or that it has correctly ignored the data (an illustrative example follows the S.M.A.R.T. list below).
  • The depth of the testing will depend both on the complexity of the change and on the importance of the affected functionality to the customer. As well as identifying what the system should do, the test criteria should cover user constraints. The test plan must include tests that reduce or eliminate each identified risk. Each test applied should meet the ‘S.M.A.R.T.’ criteria:

    Specific: what you are testing for must be clearly defined. The anticipated result must be unambiguous.

    Measurable: the anticipated result must be capable of comparison with the actual result to define pass or fail.

    Achievable: do not repeat the same test unless you are testing against new criteria. Large test matrices do not necessarily achieve more than shorter, more specific ones; they just take longer.

    Relevant: anticipated results must be those expected from the system. Do not test for what is not there.

    Timely: the test matrix should naturally follow the process; anticipated results should be ordered in the same way as the system would chronologically produce them.
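
  • Illustrative example (not part of the procedure): the sketch below shows the ‘positive’/‘negative’ distinction and the Specific and Measurable criteria in code. It uses pytest and a hypothetical routine, apply_price_change, invented purely for this illustration; the real routine, data, and validation rules will differ from job to job.

      import pytest

      def apply_price_change(record, new_price):
          """Illustrative stand-in: accept positive numeric prices, ignore anything else."""
          if isinstance(new_price, (int, float)) and new_price > 0:
              return {**record, "price": new_price}
          return dict(record)  # invalid input: the data must be left untouched

      def test_valid_price_is_applied():
          # Positive test: confirms the processing has correctly changed the data.
          # Specific and measurable: the anticipated result is exactly 12.50.
          record = {"sku": "A100", "price": 10.00}
          assert apply_price_change(record, 12.50) == {"sku": "A100", "price": 12.50}

      @pytest.mark.parametrize("bad_price", [0, -5, "ten", None])
      def test_invalid_price_is_ignored(bad_price):
          # Negative test: confirms the processing has correctly ignored the data.
          record = {"sku": "A100", "price": 10.00}
          assert apply_price_change(record, bad_price) == record

    A risk-based test plan would include more such cases, positive and negative, concentrated in the areas where the likelihood or impact of failure is highest.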
