Incremental models
Origins
Problems with the waterfall model have been recognized for a long time. The emphasis in the waterfall model is on documents and writing. Fred Brooks (The Mythical Man-Month, Anniversary Edition, 1995, page 200) recalled: "I still remember the jolt I felt in 1958 when I first heard a friend talk about building a program, as opposed to writing one." The "building" metaphor led to many new ideas: planning, specification as blueprint, components, assembly, scaffolding, and so on. But the idea that planning precedes construction remained. In 1971, Harlan Mills (IBM) proposed that we should grow software rather than build it. We begin by producing a very simple system that runs but has minimal functionality, and then add to it and let it grow. Ideally, the software grows like a flower or a tree; occasionally, however, it may spread like weeds. The fancy name for growing software is incremental development.
Functioning
There is a wide array of software development processes tagged as incremental models. They all originate in the above-mentioned deficiency of the waterfall model: its inability to cope with change. In incremental models, software is constructed step by step, and at the end of each step the result is validated. The product is designed, implemented, integrated, and tested as a series of incremental builds, where a build consists of code pieces from various modules interacting together to provide a specific functional capability and testable as a whole. The requirements, specifications, and architectural design must be completed before the implementation of the various builds commences.
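The idea of growing a product build by build, validating each build as a whole before the next one starts, can be sketched in code. The example below is purely illustrative (the `Catalog` class and its capabilities are invented, not taken from the text):

```python
# Hypothetical sketch: a product grown over three incremental builds.
# Each build adds one functional capability and is validated as a
# whole before the next build starts.

class Catalog:
    """Build 1: a minimal catalog that runs but does very little."""
    def __init__(self):
        self._items = {}

    def add(self, name, price):
        self._items[name] = price

    # Build 2: querying, added after build 1 was validated.
    def price_of(self, name):
        return self._items[name]

    # Build 3: a discount capability layered on the validated core.
    def discounted(self, name, percent):
        return self.price_of(name) * (1 - percent / 100)

# Validation step at the end of a build (build 3 shown here):
catalog = Catalog()
catalog.add("book", 20.0)
assert catalog.price_of("book") == 20.0        # build 2 still works
assert catalog.discounted("book", 50) == 10.0  # build 3 capability
```

Note how each validation step re-exercises the earlier builds as well as the new capability, which is what makes every build "testable as a whole".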
Coping with change
One of the main advantages of incremental models is their ability to cope with change during the development of the system. The waterfall model relies on careful review of documents to avoid errors; once a phase has been completed, there is limited provision for stepping back. It is difficult to verify documents precisely, and this is, again, a weakness of the waterfall model. As an example, consider an error in the requirements. With the waterfall model, the error may not be noticed until acceptance testing, when it is probably too late to correct it. It is very tedious to verify a requirements statement before it becomes operational, especially when it is buried among hundreds of other requirements statements. The real problem with finding a requirements error at the end of the production phase is that a change in one requirement very often induces a "ripple effect" of changes in other requirements and in the artifacts based on it (e.g. design, code, tests). Uncovering such a mistake towards the end of production is therefore likely to require many other changes, and uncovering many such mistakes at that point can put the whole project in jeopardy. In the incremental model, on the other hand, there is a good chance that a requirements error will be recognized as soon as the corresponding software is incorporated into the system. As software is developed and then validated in short time boxes, any errors uncovered tend to induce a ripple effect of much smaller magnitude.
Distribution of feedback
One of the main reasons why the waterfall model is not appropriate in most cases is that it accumulates too much unstable information at every stage. For example, a complete list of 500 requirements is extremely likely to change, no matter how confident the client is in the quality of those requirements at that point. Inevitably, the subsequent design and implementation phases will uncover flaws in these requirements, requiring the requirements as a whole to be updated and re-verified each time a major flaw is found. A better approach is thus to limit the accumulation of unstable information by concentrating on the definition, implementation, and validation of only a subset of the requirements at a time. Such an approach has the benefit of distributing the feedback on the quality of the accumulated information. In the waterfall model, most of the relevant feedback arrives towards the end of the development cycle, where the programming and testing are concentrated. By spreading development and validation efforts throughout the development cycle, incremental models also distribute feedback, thus increasing the stability of the accumulated artifacts.
Advantages and Drawbacks
- Advantages
- Delivers an operational quality product at each stage, but one that satisfies only a subset of the client's requirements.
- A relatively small number of programmers/developers may be used.
- From the delivery of the first build, the client is able to perform useful work: portions of the complete product may be available to customers within weeks, rather than only when the final product is finished, as with the waterfall and rapid prototyping models.
- Reduces the traumatic effect of imposing a completely new product on the client organization by providing a gradual introduction.
- There is a working system at all times.
- Clients can see the system and provide feedback.
- Progress is visible, rather than being buried in documents.
- Most importantly, it breaks the problem down into sub-problems, reducing complexity and limiting the ripple effect of changes by restricting the scope to only a part of the problem at a time.
- Distributes feedback throughout the whole development cycle, leading to more stable artifacts.
- Disadvantages
- Each additional build has somehow to be incorporated into the existing structure without degrading the quality of what has been built to date.
- Addition of succeeding builds must be easy and straightforward.
- The more the succeeding builds are a source of unexpected problems, the more the existing structure has to be reorganized, leading to inefficiency and degraded internal quality and maintainability.
- Incremental models can easily degenerate into the build-and-fix approach.
- Design errors become part of the system and are hard to remove.
- Clients see possibilities and want to change requirements.
Dangers and Solutions
Planning
The main danger of using incremental models is proceeding in too ad hoc a manner. Determining a plan of action is of prime importance to ensure the successful use of incremental models. The early stages of development must include a preliminary analysis phase that determines the scope of the project, identifies the highest risks, and defines a more or less complete list of important features and constraints, in order to establish a build plan, i.e. a plan determining the nature of each build and the order in which the features are implemented. Such a plan makes it possible to foresee issues arising in future builds, to develop the current build in light of those issues, and to make their eventual integration easier.
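A build plan of the kind described above can be as simple as an ordered feature list. The sketch below is one possible way to derive such an ordering; the feature names, risk scores, and the "core, high-risk first" rule are assumptions for illustration, not a prescribed method:

```python
# Hypothetical build plan: each feature carries an invented risk score
# and a flag saying whether it belongs to the system core. The plan
# schedules core, high-risk features in the earliest builds.

features = [
    {"name": "report export",  "risk": 1, "core": False},
    {"name": "authentication", "risk": 3, "core": True},
    {"name": "data storage",   "risk": 2, "core": True},
    {"name": "email alerts",   "risk": 1, "core": False},
]

# Core features first, then descending risk within each group.
plan = sorted(features, key=lambda f: (not f["core"], -f["risk"]))

for build_no, feature in enumerate(plan, start=1):
    print(f"build {build_no}: {feature['name']}")
```

Tackling high-risk core features early is one way a plan "foresees upcoming issues": the riskiest parts fail (or succeed) while the cost of change is still low.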
Structural quality control
Incremental models, like the build-and-fix model, are likely to result in a gradual degradation of the internal structural quality of the software. To minimize the potentially harmful effect of this on the project, certain quality control mechanisms have to be put in place, such as refactoring. Refactoring is about improving the quality of the internal structure of the software without affecting its external behavior. The net effect of a refactoring operation is to make the software easier to understand and change, thus easing the implementation of future builds. How often a refactoring operation is needed depends on how far the software's quality has degraded. Note that planning has a similar effect, by making it possible to foresee necessary changes and to develop more flexible solutions in light of what will need to be done in the future.
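A minimal sketch of what "improving internal structure without affecting external behavior" looks like in practice (the pricing rules here are invented, and extract-function is just one of many refactoring techniques):

```python
# Before: one tangled function mixing tax and shipping rules.
def total_before(price, country):
    if country == "US":
        return price * 1.07 + (0 if price > 50 else 5)
    return price * 1.20 + (0 if price > 50 else 9)

# After: the same behavior, restructured into named helpers that the
# next build can extend without re-reading the whole routine.
def _tax(price, country):
    return price * (1.07 if country == "US" else 1.20)

def _shipping(price, country):
    if price > 50:
        return 0
    return 5 if country == "US" else 9

def total_after(price, country):
    return _tax(price, country) + _shipping(price, country)

# The refactoring is safe only if external behavior is unchanged:
for p in (10, 60):
    for c in ("US", "FR"):
        assert total_before(p, c) == total_after(p, c)
```

The final loop is the essential part: a refactoring is validated by showing that observable behavior is identical before and after the restructuring.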
Architectural baseline
One of the reasons for the degradation of the internal structural quality of the system across increments is often a lack of architectural design. Processes like the Unified Process advocate the early definition of the architecture of the system, or early identification and design of the system core. Such a practice eases the grafting of new parts onto the system throughout the increments, and minimizes the magnitude of the changes required when new parts of the builds are grafted on. This is also related to the two preceding items:
- Achieving an architectural design is advisable when writing a project plan, and the architecture can also help build a clear plan that developers can relate to.
- Achieving an architectural design helps control the structural quality of the system by providing a framework for the entire application, helping the developers see the big picture of the system as they work on individual parts during the development of the different builds. Also, refactoring operations normally result in defining or refining the architecture of the system.
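One common way to make an architectural baseline concrete is to fix a stable interface in the system core early, so that later builds graft on new parts by implementing it. The sketch below assumes an invented reporting example; it is one possible realization, not the only one:

```python
# Hypothetical architectural baseline: a stable Exporter interface is
# defined in the system core before most builds exist. Later builds
# add new exporters without reorganizing the core.
from abc import ABC, abstractmethod
import json

class Exporter(ABC):
    """Part of the architectural baseline: fixed early, rarely changed."""
    @abstractmethod
    def export(self, data: dict) -> str: ...

class CsvExporter(Exporter):      # delivered in an early build
    def export(self, data):
        return "\n".join(f"{k},{v}" for k, v in data.items())

class JsonExporter(Exporter):     # grafted on in a later build
    def export(self, data):
        return json.dumps(data)

def run_report(exporter: Exporter, data: dict) -> str:
    # The core depends only on the interface, so adding a new
    # exporter requires no change here.
    return exporter.export(data)

assert run_report(CsvExporter(), {"a": 1}) == "a,1"
assert run_report(JsonExporter(), {"a": 1}) == '{"a": 1}'
```

Because `run_report` depends only on the `Exporter` interface, each new build changes the system by addition rather than by reorganization, which is exactly the effect the architectural baseline is meant to achieve.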
Parallel builds: risky
In this approach, various builds are performed simultaneously by different teams: once the design phase of the first build has started, the specification team is already starting on the specification of the second build. The risk is that the resulting builds will not fit together. Each build inevitably has some intersection with other builds, so good coordination and communication are important to make sure that teams with intersecting builds agree on the nature and implementation of their common intersection. The more builds are done concurrently, the faster this problem grows. Also, a larger number of software developers is needed compared with linear incremental development.