Do you browse the Internet, use Internet banking applications or play games on a tablet? Then you are a software system user. Companies compete for your attention, hoping you will buy or use their software. To convince you, they need to supply a high-quality product, or you will simply download a competitor's application.
Testing is essential to ensure quality, yet the process is challenging, time-consuming, expensive, and gives inconsistent results depending on circumstances such as who tests the product or how much time is allocated to it. Our research team (the Processing Engineering, Security and Testing [PEST] research group) is trying to solve these problems by making it easier and quicker to judge whether testing is being done correctly within an organisation. One area of focus for us is known as mutation testing.
In order to understand the technique, it helps to imagine passengers going through an airport security scan. Airports are equipped with security scanners, a doorway-like device that scans for forbidden items such as weapons or dangerous chemicals. Now imagine that you are Head of Security at an airport and have just bought a scanner. You naturally want to make sure it works properly. How would you do this? Probably by walking through the scanner carrying various hidden items. If the machine does not raise the alarm, you would be on a hotline to the supplier.
Software is tested in a similar way. Software systems are usually developed stepwise. Developers release a version of an app or program to customers, who start using it while the developers work on the next version. These new versions arrive on your device as updates.
The problem with updates is that they can break a system. To prevent this, software engineers develop automated test suites: computer programs that scan software and find hidden problems, much like an airport scanner would. Instead of scanning passengers for forbidden items, a test suite scans a software system for faults. When a developer makes a change to a system, she tests it using the automated test suite, and if no failures occur, the system is deemed good enough to release to customers. In theory, this process works. In practice, test suites are not perfect and tend to miss errors.
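For readers who code, a test suite can be as simple as a handful of checks. The sketch below is a toy illustration in Python, with a hypothetical `apply_discount` function standing in for a real system: each assertion scans for one kind of fault, and a clean run means the version is deemed releasable.

```python
# Toy example: a tiny "system" and its automated test suite.
# (apply_discount is a hypothetical function, not from a real product.)

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return price * (100 - percent) / 100

def run_test_suite():
    """Each assertion scans the system for one kind of fault."""
    assert apply_discount(100, 0) == 100   # no discount leaves price unchanged
    assert apply_discount(100, 25) == 75   # 25% off 100 gives 75
    assert apply_discount(200, 50) == 100  # 50% off halves the price
    print("All tests passed: this version is deemed good enough to release.")

run_test_suite()
```

If any assertion fails, the release is blocked; the catch, as the text notes, is that a suite can only find the faults its assertions happen to probe.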
To make better test suites, our PEST research group has turned to mutation testing, a technique that tests the test suite. Going back to the airport scanner scenario: much as a passenger might walk through the scanner multiple times, each time concealing a different forbidden item, with mutation testing we purposely inject a fault into a system and then check whether the test suite detects it. The faulty version of the system is called a mutant, hence the technique's name. We do this thousands of times, injecting all sorts of faults that programmers are likely to make, and each time we check whether the test suite is good enough to catch that fault. If not, we update the test suite to make it better.
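The mutation testing loop can be sketched in a few lines of Python. Everything here is a toy illustration rather than our actual tool: the "system" is a single addition function, and the injected fault simply swaps its operator.

```python
# Toy mutation testing: inject a fault, then see if the suite notices.

def original(a, b):
    return a + b

def mutant(a, b):   # injected fault: '+' mutated into '-'
    return a - b

# The test suite: inputs paired with their expected outputs.
test_cases = [((2, 3), 5), ((10, 0), 10)]

def suite_kills(program):
    """Return True if at least one test fails, i.e. the fault is detected."""
    return any(program(*args) != expected for args, expected in test_cases)

print(suite_kills(original))  # False: the real system passes every test
print(suite_kills(mutant))    # True: the suite detects ("kills") the mutant
```

A mutant that survives (no test fails) reveals a gap in the suite, which is exactly the signal used to improve it.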
Mutation testing takes time. If a typical test suite takes ten minutes to run, a modest mutation test run of 5,000 mutants would take 50,000 minutes, or over a month of continuous analysis, yet a development team usually releases a new system version every two weeks. Because the technique is so time-consuming, it has never been widely adopted by industry; we are trying to help companies bring it on board.
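The arithmetic behind that estimate is easy to check:

```python
# Back-of-the-envelope cost of a naive mutation test run,
# using the figures from the text.
suite_minutes = 10                  # one full run of the test suite
mutants = 5000                      # faults injected, one suite run each
total_minutes = suite_minutes * mutants
print(total_minutes)                # 50000 minutes
print(total_minutes / (60 * 24))    # ~34.7 days, i.e. over a month
```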
“The idea behind our solution is deceptively simple.”
The idea behind our solution is deceptively simple. As already stated, software is developed incrementally over time. On a project’s first day, there is no system and no test suite. By the end of the day, there might be a few dozen lines of code. Because there is so little code, there are few possible mutants, the test suite itself would finish in two to five seconds, and a mutation test run at this point would take only a few minutes. By the end of the second day, with more code, a mutation test run would take longer. Our technique is cleverer: instead of analysing all the code every time, it computes the precise differences between the system on day one and day two, identifies the parts of the system that need mutation testing, and analyses only those.
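The core of that incremental idea can be sketched with Python's standard `difflib` module. This is only an illustration of the principle, not the PEST tool itself: compare two versions of the code and flag the changed lines as the ones needing fresh mutants.

```python
# Sketch of the incremental idea: only the lines that changed between
# versions need new mutants. (Illustration only, not the actual tool.)
import difflib

day_one = ["def total(prices):",
           "    return sum(prices)"]
day_two = ["def total(prices):",
           "    return sum(prices)",
           "def average(prices):",
           "    return sum(prices) / len(prices)"]

# difflib.ndiff prefixes added lines with "+ "; collect just those.
changed = [line[2:] for line in difflib.ndiff(day_one, day_two)
           if line.startswith("+ ")]
print(changed)
# Only these lines receive new mutants; the rest of the system,
# already mutation-tested on day one, can be skipped.
```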
When we ran our system, it reduced testing time by 88% and 91%. Achieving this took us three years, and the technique’s development was challenging. We needed to mathematically prove that checking the test suite at every stage of development is at least as good as testing it at the very end. We also needed a quick and accurate way to analyse which parts of a system are most vulnerable to a change. Our solution, a good compromise, was to determine the importance of a system part by how it communicates with other parts and how it uses data. Our system is now ready for industry, and we have named it Incremental Mutation Testing because it analyses a test suite as it is being built.
We now want to prove that it works in an industrial setting, and we are looking to collaborate with industrial partners to run our system. We hope this will show that our system can help companies develop higher-quality software more efficiently and cost-effectively, building better systems for everyone’s laptops, tablets, and phones.
The PEST Lab would like to thank Bit8 Ltd and MCST (through their funding of Project GOMTA) for their support with this work.