The V-model represents a software development process (also applicable to hardware development) which may be considered an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape.
The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.
User Requirements
The next broad step is to define a set of “User Requirements”, which is a statement by the customer of what the system shall achieve in order to meet the need. These involve both functional and non-functional requirements. Further details are in the requirements article.
“Requirements” are then passed to developers, who produce a “System Specification”. This changes the focus from what the system shall achieve to how it will achieve it by defining it in computer terms, taking into account both functional and non-functional requirements.
Other developers produce a “System Design” from the “System Specification”. This takes the features required and maps them to various components, and defines the relationships between these components. The whole design should result in a detailed system design that will achieve what is required by the “System Specification”.
Each component then has a “Component Design”, which describes in detail exactly how it will perform its piece of processing.
Finally, each component is built, and then is ready for the test process.
Each level of test focuses on one level of the system's structure, and derives from the way a software system is designed and built up. Conventionally this is known as the “V-Model”, which maps a type of test to each stage of development.
Starting from the bottom, the first test level is the “Component Test”, sometimes called Unit Testing. It involves checking that each feature specified in the “Component Design” has been implemented in the component.
In theory, an independent tester should do this, but in practice the developer usually does it, as the developer is often the only person who understands how the component works. The problem with a component is that it performs only a small part of the functionality of a system, and it relies on co-operating with other parts of the system, which may not have been built yet. To overcome this, the developer either builds special test software (stubs and drivers), or uses a test harness, to trick the component into believing it is working in a fully functional system.
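A minimal sketch of this idea in Python, using hypothetical names: a `PriceCalculator` component depends on a tax-rate service that has not been built yet, so the test supplies a hand-written stub in its place.

```python
# Hypothetical component under test: a price calculator that depends on a
# separate tax-rate service which may not have been built yet.
class PriceCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def gross_price(self, net_price):
        rate = self.tax_service.rate_for("standard")
        return round(net_price * (1 + rate), 2)

# A stub standing in for the missing tax service: it "tricks" the component
# into believing it is running inside a fully functional system.
class StubTaxService:
    def rate_for(self, category):
        return 0.20  # fixed, predictable answer for the test

def test_gross_price_applies_tax():
    calc = PriceCalculator(StubTaxService())
    assert calc.gross_price(10.00) == 12.00

test_gross_price_applies_tax()
```

Because the stub returns a fixed rate, the test checks only the component's own behaviour, not that of its collaborators.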
As the components are constructed and tested, they are then linked together to check whether they work with each other. It is quite common for two components that have each passed all their tests to produce, when connected, a combination that is full of faults. These tests can be done by specialists, or by the developers.
Interface Testing is not focussed on what the components are doing but on how they communicate with each other, as specified in the “System Design”. The “System Design” defines relationships between components, and this involves stating:
- What a component can expect from another component in terms of services.
- How these services will be asked for.
- How they will be given.
- How to handle non-standard conditions, i.e. errors.
Tests are constructed to deal with each of these.
The tests are organised to check all the interfaces, until all the components have been built and interfaced to each other producing the whole system.
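The four points above can be sketched as an interface test between two hypothetical components (the names `OrderStore` and `ReportGenerator` are illustrative, not from the original): the test exercises the contract between them, including the agreed error behaviour, rather than the internals of either one.

```python
# Two hypothetical components whose interface is defined in the "System Design".
class OrderStore:
    def __init__(self):
        self._orders = {}

    def add(self, order_id, amount):
        self._orders[order_id] = amount

    def lookup(self, order_id):
        # Non-standard condition: the agreed behaviour is to raise KeyError.
        if order_id not in self._orders:
            raise KeyError(order_id)
        return self._orders[order_id]

class ReportGenerator:
    def __init__(self, store):
        self.store = store

    def line_for(self, order_id):
        try:
            return f"order {order_id}: {self.store.lookup(order_id):.2f}"
        except KeyError:
            return f"order {order_id}: missing"

# Interface tests: what one component may expect of the other, how the
# service is asked for, how the result is given, and the error case.
store = OrderStore()
store.add("A1", 9.5)
report = ReportGenerator(store)
assert report.line_for("A1") == "order A1: 9.50"    # normal service
assert report.line_for("B2") == "order B2: missing" # error handling
```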
Once the entire system has been built, it has to be tested against the “System Specification” to check that it delivers the features required. It is still developer-focussed, although specialist developers known as systems testers are normally employed to do it.
In essence, the System Test is not about checking the individual parts of the design, but about checking the system as a whole. In effect, it is one giant component.
System testing can involve a number of specialist types of test to see whether all the functional and non-functional requirements have been met. In addition to functional testing, these may include the following types of test for the non-functional requirements:
- Performance – Are the performance criteria met?
- Volume – Can large volumes of information be handled?
- Stress – Can peak volumes of information be handled?
- Documentation – Is the documentation for the system usable?
- Robustness – Does the system remain stable under adverse circumstances?
There are many others, the needs for which are dictated by how the system is supposed to perform.
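Two of the non-functional tests above can be sketched concretely. The system entry point (`search`), the record count, and the one-second criterion are all illustrative assumptions, not part of the original text.

```python
import time

# Hypothetical system entry point: a search over 100,000 records, with an
# assumed performance criterion of answering in under one second.
records = [f"item-{i}" for i in range(100_000)]

def search(term):
    return [r for r in records if term in r]

# Volume test: can a large body of data be handled at all?
assert len(search("item-")) == 100_000

# Performance test: is the stated response-time criterion met?
start = time.perf_counter()
search("item-99999")
elapsed = time.perf_counter() - start
assert elapsed < 1.0, f"search took {elapsed:.3f}s, criterion is < 1s"
```

Stress testing would follow the same pattern but drive the system at peak load rather than typical load.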
Acceptance Testing checks the system against the “User Requirements”. It is similar to systems testing in that the whole system is checked but the important difference is the change in focus:
- Systems Testing checks that the system that was specified has been delivered.
- Acceptance Testing checks that the system delivers what was requested.
The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business, and is the only one qualified to make that judgement. The forms of the tests may follow those used in system testing, but at all times they are informed by the business needs.
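The change of focus can be illustrated with a hypothetical user requirement, "a clerk can record a sale and see the day's total" (the requirement and the `TillSystem` name are invented for this sketch). The acceptance check walks through the system the way a user would, and is judged against the business need rather than the specification.

```python
# Hypothetical delivered system, exercised from the customer's point of view.
class TillSystem:
    def __init__(self):
        self.sales = []

    def record_sale(self, amount):
        self.sales.append(amount)

    def daily_total(self):
        return sum(self.sales)

# Acceptance scenario: the clerk records two sales during the day...
till = TillSystem()
till.record_sale(4.50)
till.record_sale(2.25)
# ...and the acceptance criterion is that the total the clerk sees
# matches the sales actually made.
assert till.daily_total() == 6.75
```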