Seven Principles of ISEB Software Testing
September 13, 2017
ISEB Software Testing Foundation training courses introduce students to the fundamentals of software testing, including the reasons for carrying out tests, basic test processes and the general principles that underpin testing good practice. Knowing these principles, and understanding how they affect the software tester, is crucial to passing the ISEB Software Testing Foundation exam.
1. Testing shows the presence of bugs
That is, testing can show that problems exist, but not that problems do not exist.
This principle lies at the core of ISEB Software Testing guidance. An astute test analyst understands that even if a test does not reveal any faults, the subject of the test is not necessarily error-free.
The key objective of carrying out a test is to identify defects. Working under the assumption that every product will contain defects of some kind, a test that reveals errors is generally better than one that does not. All tests should therefore be designed to reveal as many errors as possible.
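This principle can be illustrated with a minimal sketch. The function below is hypothetical and contains a deliberate defect; the tests shown all pass, yet the software is not error-free:

```python
# Hypothetical function intended to return the absolute value of x.
# It contains a deliberate defect on one untested input.
def absolute(x):
    if x < 0:
        return -x
    if x == 0:
        return 1   # defect: should return 0, but no test exercises this case
    return x

# These tests all pass, yet a fault remains for the input 0.
assert absolute(5) == 5
assert absolute(-3) == 3
print("All tests passed")
```

The passing test run demonstrates only that no defect was revealed by these particular tests, not that none exists.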
2. Exhaustive testing is impossible
Exhaustive testing means feeding every possible data combination into the software, in order to ensure that no untested situation can arise once the software has been released. Except in extremely simple applications, the number of possible data combinations is prohibitively high; it is more effective and efficient for testers to focus on risks and priorities, so that tests are targeted at the areas of greatest testing need.
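A back-of-the-envelope calculation, using an assumed example of a form with ten independent single-byte fields, shows why the number of combinations becomes unmanageable:

```python
# A hypothetical form with 10 independent input fields, each of which can
# hold any of 256 byte values, already has 256**10 input combinations.
fields = 10
values_per_field = 256
combinations = values_per_field ** fields
print(combinations)  # 1208925819614629174706176, i.e. roughly 1.2e24

# Even at a (generous) billion test executions per second, running every
# combination once would take tens of millions of years.
seconds = combinations / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:,.0f} years")
```

Hence the principle: test selection guided by risk and priority, not enumeration, is the only practical option.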
3. Early testing
A product (including documents, such as the product specification) can be tested as soon as it has been created. The ISEB software testing guidance recommends testing a product as soon as possible, in order to fix errors as quickly as possible. Studies have shown that errors identified late in the development process generally cost more to resolve.
For example: an error in a product specification may be fairly straightforward to fix. However, if that error is carried through into the software code, fixing the mistake can become costly and time-consuming.
4. Defect clustering
Studies suggest that problems in an item of software tend to cluster around a limited set of modules or areas. Once these areas have been identified, efficient test managers are able to focus testing on the sensitive areas, while still searching for errors in the remaining software modules.
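The clustering effect is easy to surface from defect records. The sketch below uses a small, entirely hypothetical defect log (the module names and counts are invented for illustration) and tallies defects per module, showing how a small subset of modules can account for most of the faults:

```python
from collections import Counter

# Hypothetical defect log: each entry records the module in which
# a defect was found. The data is invented for illustration only.
defect_log = [
    "payments", "payments", "payments", "payments", "payments",
    "payments", "payments", "auth", "auth", "reports",
]

counts = Counter(defect_log)
total = sum(counts.values())

# Rank modules by defect count: in this sample, one module of three
# accounts for 70% of all recorded defects.
for module, n in counts.most_common():
    print(f"{module}: {n}/{total} ({n / total:.0%})")
```

A tally like this can help a test manager direct extra testing effort at the defect-dense modules while maintaining baseline coverage elsewhere.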
5. The ‘pesticide’ paradox
Like over-used pesticide, a set of tests that is used repeatedly on the same software product will decrease in efficacy. Using a variety of tests and techniques will expose a range of defects across different areas of the product.
6. Testing is context dependent
The same tests should not be applied across the board. Different software products have varying requirements, functions and purposes. A test designed to be performed on a website, for example, may be less effective when applied to an intranet application. A test designed for a credit card payment form may be unnecessarily rigorous if performed on a discussion forum.
In general, the higher the probability and impact of damage caused by failed software, the greater the investment in performing software tests.
7. Absence of errors fallacy
Declaring that a test has unearthed no errors is not the same as declaring the software “error-free”. To ensure that adequate software testing procedures are carried out in every situation, testers should assume that all software contains some (albeit concealed) faults.
Software testing good practice is an essential part of ensuring the quality of IT products. While software testing cannot guarantee that the software contains no errors, it does contribute significantly to the identification and reduction of faults, improving the likelihood that the software implementation will succeed.