We begin the article by presenting a brief historical perspective on software development. We present the views of several software practitioners, who hold differing opinions on how software should be developed. Two major schools of thought in this context are presented: software engineering and software craftsmanship. We then examine how agile and iterative development learns from the flaws of the traditional Waterfall model of software development. We incorporate some of the best practices based on our experience and beliefs, inspired by a few well-known agile and iterative development methods, in an attempt to formulate a well-rounded method. The article concludes by presenting some interesting yet controversial questions in agile and iterative development, such as the right team size, good estimation techniques, and the debate between high-tech and high-touch approaches. We also give some advice for a successful adoption and implementation of agile and iterative development.
Since the advent of the first digital computer in the early 1940s, software applications have evolved steadily, with continuously improving technologies and practices raising the productivity of software developers and the quality of software applications. The traditional prescribed way of developing software is through a series of software engineering processes. Pete McBreen has proposed a craft model that focuses on the people involved in software development. He realized that programming often requires both scientific (reasoning about logical propositions) and artistic (creatively formulating various logical propositions) elements that must be nurtured and reflected within individual developers in order to deliver software of higher quality. His model was strongly supported by Grady Booch, who believes that good people are as important as good process.
Agile and Iterative Development Methods
Agile and iterative development is meant to address the shortcomings of the traditional Waterfall model in software development [4, 5]. It suggests that software development is more appropriately regarded as new product development or an inventive project, with a high degree of novelty, creativity, and change, and no previous identical cases from which to derive estimates or schedules. Thus an empirical approach that welcomes change is needed. Agile development is based on the Agile Manifesto (Listing 1) published by the Agile Alliance [8, 9], which focuses on simplicity, lightness, and communication to maintain a rapid and flexible response to change through light and flexible plans. Iterative development breaks the overall development lifecycle into several iterations in sequence. Each iteration has a fixed end date that is not allowed to change. The team is focused on producing the iteration release at the end of the iteration: a partially complete system that is stable, integrated, and tested. The feedback gathered from this release is used for planning upcoming iterations, until the final iteration release, when the complete product is released to the market or customers (as illustrated in Figure 1).
We propose our method of agile and iterative development in the following sections. The three agile methods most influential to us are Scrum [10, 11], Extreme Programming (XP), and the Unified Process (UP) [19, 20]. They have provided us with a well-rounded perspective on agile and iterative development: in particular, Scrum’s project management style for self-directed teams [12, 13], XP’s structure for ensuring that best engineering practices are applied, and UP’s architecture-centric approach to mitigating technical risks.
Mixing the Best Methods
There is a case to be made for mixing various agile and iterative methodologies or components to find the right balance between various development methods. We present in the following sections our vision for such a software development process.
The Workflow Lifecycle
The main purpose of the conceptualization phase is to establish a common vision. The team and other stakeholders of the project need to agree on the scope, vision, and priorities. A few requirements workshops are conducted to capture 10% of the significant requirements in detail. These are preferably features that deliver high business value and span as many architecturally influential elements as possible. Story cards are used to record these features as user stories (see Section 2.2), along with rough estimates of the development effort required. A use case model and supplementary specifications can be created to substantiate the requirement details if an onsite customer is not going to be available. The key risks of the project are identified and the release date is determined. Ideally, this phase should be short, from a few days to a week. If it takes longer, it is usually a sign of excessive up-front specification or planning, which should be avoided.
In the exploration phase, the primary objective is to implement the high-risk, high-value features chosen in the conceptualization stage, while finalizing the requirements and features list. It is desirable to mitigate the major architectural risks in this phase by means of research, discovery, and creativity. However, it is important to understand that this phase is not only concerned with research, design modeling, and documentation, but also includes programming work. While the main goal of this phase is to deliver an evolutionary prototype with production-quality components that serves as a solid foundation for ongoing development, the creation of a small number of throwaway prototypes is acceptable to mitigate specific risks, such as design-requirements tradeoffs, component feasibility studies, or demonstrations to investors, customers, and end users. The software architecture document, created alongside programming and testing, evolves iteration by iteration, summarizing the big ideas and motivations of the architecture. In addition to development, there is a series of short requirements workshops (one per iteration) to refine most of the requirements based on feedback from the growing system, and the estimates on story cards are further improved in light of the team’s experience on the development tasks.
By the end of this phase, the user stories to be implemented for the upcoming release are selected based on their estimates and the time available before the release date. The software architecture document is finalized, summarizing the stabilized architecture, supported by the significant programming work done to build and prove it. The software object model can also be documented quickly by reverse engineering it from code. This document is meant to be a short and concise learning aid that allows the team to form a mental image of the whole system, serves as a guide when making assumptions and decisions in detailed design, and instills common understanding.
During the production phase, the customer (or product manager) chooses the user stories to implement at the beginning of each iteration. These user stories are selected for the most business value, while ensuring that they can be completed within an iteration. The developers then break the user stories into many short, estimated programming tasks. The total estimated task-level effort may lead to a readjustment of the chosen user stories. It is a mistake to create, at the start of the first iteration, a plan that lays out exactly how many iterations there will be and what will occur in each. The team plans only the next upcoming iteration, and planning then adapts iteration by iteration based on current feedback. Iteration planning is typically a day’s work, or at most two.
Once this is done, development work commences by implementing the selected user stories, prioritized by highest business value first. The developers communicate with the onsite customer whenever possible to get accurate details about a user story, otherwise referring to the prepared use case model and supplementary specification. The team comes up with the simplest, most straightforward design that works while complying with the macro architecture described in the software architecture document. At the end of an iteration, most (if not all) user stories planned initially are implemented, integrated, and tested. An internal release of the system is produced for a demo (an actual product demo, not a PowerPoint presentation) in a review meeting attended by the team, customer, and other project stakeholders. The team articulates the system functions, design, strengths, weaknesses, team effort, and future trouble spots. Feedback and brainstorming on future directions are discussed and noted, but no commitments are made during the meeting until the next iteration planning. The series of iterations ultimately works towards the release date to produce a fully working and tested system ready for release.
After the release, post-production work is carried out, such as deployment, training, marketing, and sales. Documentation and manual writing are done incrementally in parallel with the development effort and finalized near the release date, when actual printing begins. The maintenance phase involves activities like enhancements and bug fixes, which can be conducted by following a similar workflow lifecycle to produce incremental releases and bug patches. Figure 2 summarizes the entire workflow lifecycle.
The Core Practices
The workflow process described in Section 2.1 sets the direction for the team through the course of development by stating the main purpose of each stage, the typical activities, and the recommended duration. The following outlines the core practices adopted from various agile methods that are carried out by the team in different stages throughout the development process. These are best practices that the team applies on a minute-by-minute basis. We are attempting to have a development process that is agile and has a clear direction, knowing the best things we need to do to achieve these goals, while making some practices routine in the process. We need to ensure that this will not only increase productivity, but also reduce the requirements and technical risks of the software, as well as nurture a satisfied and sustainable team. Figure 3 provides an overall picture of how these practices fit within the workflow lifecycle.
The Requirement Workshop is a meeting between the project manager, customer, and other stakeholders in the early stages of the development lifecycle to identify the vision, high-level objectives, and business case, as well as to agree on the scope, priorities, and release date. Features and requirements are captured as user stories (customer-visible functionality or scenarios in the software) written briefly on story cards (A5 or A6 sized index cards), substantiated by use case models and supplementary specifications when necessary. Release Planning is conducted to define the scope of user stories and decide what to do and what to defer, in order to provide the best possible release by the agreed date. The time and effort required to implement each user story are estimated in ideal engineering hours [29, 30]. Estimates can be improved through experience gained during the exploration phase, by experimenting with spike solutions, and by splitting large user stories. The customer then picks the stories with the most business value whose estimated time and effort fit within the release date. Just before an iteration starts, Iteration Planning allows the customer to choose the user stories to be implemented during the iteration, while the team brainstorms the engineering tasks (on a whiteboard or cards) needed to fulfill the stories. The developers then volunteer to sign up for a set of tasks and estimate them. Every task should be estimated in the half-day to two-day range; otherwise, it is split further.
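The release planning arithmetic above can be sketched as follows. The story names, business values, and estimates are invented for illustration, and real planning is of course a negotiation between customer and team rather than a greedy algorithm:

```python
# Hypothetical sketch of release planning: the customer picks the
# highest-business-value stories whose estimates (in ideal
# engineering hours) fit within the capacity before the release date.
def plan_release(stories, capacity_hours):
    """stories: list of (name, business_value, estimate_hours)."""
    chosen = []
    remaining = capacity_hours
    # Consider stories in descending order of business value.
    for name, value, estimate in sorted(stories, key=lambda s: -s[1]):
        if estimate <= remaining:  # story still fits before the release
            chosen.append(name)
            remaining -= estimate
    return chosen

stories = [
    ("login", 8, 16),
    ("report export", 5, 40),
    ("search", 9, 24),
    ("theming", 2, 12),
]
print(plan_release(stories, 50))  # ['search', 'login']
```

Re-running the same calculation as estimates improve (through spike solutions or split stories) keeps the release scope honest.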
Analysis and Design
The developers use the simplest possible design that gets the job done, bearing in mind that the requirements will change tomorrow, so they design only what is needed to meet today’s requirements and avoid creating generalized “just in case” components. To foster common understanding and eliminate the fear of not knowing what to do, hold a quick design session where the developers get together and spend anywhere from a few minutes to half a day sketching out the design. The use of high-touch, low-tech methods is encouraged during the design session, such as CRC design with a few cards [21, 22, 23], or sketching some UML on a whiteboard, flipchart, or sheet of paper. When arguing over design alternatives, pick the simplest one that could possibly work, or try a few to find out. To ensure a simple design that contains minimal, simple, and comprehensible code at all times, continuous design improvement through refactoring is crucial and should become part of the daily programming habit.
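As a tiny illustration of such continuous design improvement, the hypothetical snippet below refactors a function containing a magic number and an inlined loop into intention-revealing pieces without changing its behaviour:

```python
# Before: working code, but with a magic number and inlined summing.
def invoice_total(items):
    total = 0
    for price, qty in items:
        total += price * qty
    return total * 1.06  # what is 1.06?

# After refactoring: same behaviour, clearer intent.
TAX_RATE = 0.06  # the magic number now has a name

def subtotal(items):
    return sum(price * qty for price, qty in items)

def invoice_total_refactored(items):
    return subtotal(items) * (1 + TAX_RATE)
```

The unit tests written beforehand (see the testing practices below, in Section 2.2) are what make such refactoring safe: both versions must keep passing the same tests.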
Implementation and Testing
All production code is created by two developers at one computer, both actively engaged via open communication to keep each other on task and motivated. While one developer is coding the immediate programming task, the observer is doing real-time code review. The developers all need to agree on a coding standard and, most importantly, all must use and enforce it. This ensures that the code communicates as clearly as possible and supports a shared responsibility for quality by everyone. Vic Hartog has written a very comprehensive coding standard for the C# programming language. All developers are responsible for the whole system, and any source code may be changed by any developer at any time. If a developer identifies a problem or discovers a chance to improve a certain portion of the system, it is the developer’s responsibility to fix or enhance it by pairing with an experienced developer, or at least to address it during the next standup meeting.
It is mandatory to write unit tests for all functions, methods, and classes written by the developers. In fact, the unit tests must be created prior to the actual coding, and they are released into the code repository along with the code they test. Having unit tests available prior to coding helps the developers be more objective and code just enough to meet the original intention. Unit tests also ensure that new modifications do not break the functionality of existing code. There are several unit test frameworks that can be used to simplify unit test creation and automation. Aside from unit testing, Acceptance Tests and Customer Tests are written from the user’s perspective by the customer to test every feature. The testers implement them in an automated way, usually by comparing the results produced by the program with predefined results created by the customer. A bug database is needed for cases when manual testing by the customer and tester is necessary, to keep track of test results and defects from iteration to iteration.
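As a small sketch of this test-first practice (using Python’s standard `unittest` framework; the function and its behaviour are hypothetical), the tests below would be written and checked in before `parse_quantity` itself:

```python
import unittest

def parse_quantity(text):
    """Parse a non-negative integer quantity from user input.
    Written only after the unit tests below pinned down the intent."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_parses_plain_number(self):
        self.assertEqual(parse_quantity("42"), 42)

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(parse_quantity("  7 "), 7)

    def test_rejects_negative_quantity(self):
        with self.assertRaises(ValueError):
            parse_quantity("-1")
```

Tests like these run with `python -m unittest` as part of the automated suite, so any later modification that breaks the intended behaviour is caught immediately.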
All checked-in code in the repository is continuously re-integrated and tested frequently on a build machine, in an automated 24/7 process loop of compiling, running all unit tests, and running all or most acceptance tests. The developers are notified by email if there are problems during the build and test process. Ongoing effort is put into keeping the build time low (10 minutes ideally) to maintain the true purpose of continuous integration.
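A much-simplified sketch of such a build loop is shown below. The `make` targets and the one-minute polling interval are invented placeholders, and a real team would typically use a dedicated CI server rather than a hand-rolled script:

```python
import subprocess
import time

# Hypothetical build pipeline: compile, run all unit tests, then
# run all or most acceptance tests.
BUILD_STEPS = [
    ["make", "build"],
    ["make", "unittest"],
    ["make", "acceptance"],
]

def run_pipeline(steps, run=subprocess.run):
    """Run each step in order; return the failing command, or None."""
    for step in steps:
        if run(step).returncode != 0:
            return " ".join(step)
    return None

def watch(interval_seconds=60):
    """24/7 loop on the build machine."""
    while True:
        failed = run_pipeline(BUILD_STEPS)
        if failed is not None:
            # Stand-in for emailing the developers about the breakage.
            print(f"Build broken at: {failed}")
        time.sleep(interval_seconds)
```

Keeping each pipeline step fast is what makes the ideal 10-minute total build time achievable.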
A Daily Standup Meeting is conducted at the same time and place each workday: a 15- to 20-minute meeting with the team members standing in a circle, focusing on the same specific questions answered by each member: (1) What have you done since the last standup meeting? (2) What will you do between now and the next standup meeting? (3) What is getting in the way (blocking) of meeting the iteration goals? This practice provides a frequent measuring and adaptive response mechanism to update tasks and remove any impediments. Decisions on blocks reported at the daily standup meeting are ideally made immediately, or within an hour; the value that “bad decisions are better than no decisions” is promoted, and the blocks themselves are ideally removed before the next meeting. The project manager should create a Team Firewall to ensure the team is not interrupted by work requests from external parties; if they occur, the manager removes them and deals with all political and external management issues. The whole team needs to establish a Common Vocabulary, or “System of Names”, from the language of the problem domain that everyone can use. This makes communication between the developers and customers easier.
Performance Measurement and Tracking
Daily Tracking is about tracking progress in terms of the actual number of programming hours spent per day by every developer on their tasks or user stories. By tracking the real effort expended, we can direct help and provide resources when a story is over or nearing its estimate, in order to complete the task as quickly as possible. A big picture of the team’s progress can be illustrated by plotting a line chart with the X-axis representing days in the iteration and the Y-axis the effort remaining, obtained by subtracting the total programming task hours spent from the estimates. To measure how much work the team can get done in one iteration, Project Velocity is measured by summing up the actual programming effort spent on all completed user stories, plus the completed tasks of any unfinished user stories for the iteration. Based on the knowledge gained in past iterations, the upcoming user stories are re-estimated, and the customer is allowed to choose a number of user stories equating to the project velocity measured in the previous iteration. The project velocity is expected to go up and down, but if the change is too dramatic after a few iterations, a release planning session is needed to revise the scope and release date. The core idea is to track the total amount of work done by the team during each iteration to keep the development moving at a steady, predictable pace.
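Both measurements reduce to simple arithmetic; the sketch below uses made-up hours to show how the chart data and the velocity sum are derived:

```python
# Burndown points for the daily tracking chart: remaining effort
# per day is the total estimate minus cumulative hours spent.
def burndown(total_estimate_hours, hours_spent_per_day):
    points = []
    remaining = total_estimate_hours
    for spent in hours_spent_per_day:
        remaining -= spent
        points.append(remaining)  # one Y-axis value per day
    return points

# Project velocity: actual effort on completed stories, plus the
# completed tasks of any unfinished stories in the iteration.
def velocity(completed_story_hours, completed_task_hours):
    return sum(completed_story_hours) + sum(completed_task_hours)

print(burndown(100, [12, 10, 14]))    # [88, 78, 64]
print(velocity([16, 24, 8], [4, 6]))  # 58
```

The customer would then select roughly 58 ideal hours of stories for the next iteration, adjusting as the measured velocity drifts.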
To continuously verify software quality, Quality Tracking is done by measuring unit test scores over time, which should always be a 100 percent pass. The official customer measurement of quality is the acceptance tests, which are evaluated at the end of every iteration or, if possible, every day. The number of acceptance tests gives a good measure of the testing scope, and the number of successful tests tells how well the team is doing. A bug tracking system is important to record bugs when found and track them until they are fixed. The tracked bugs might form patterns that allow analysis and assumptions to predict future occurrences.
To deal with changes, an Iteration Demo and Review is conducted at the end of each iteration to demonstrate the current iteration release to the team, customer, and other stakeholders. This is an actual product demonstration, not a PowerPoint presentation. The goals include informing stakeholders of the system functions, design, strengths, weaknesses, team effort, and future trouble spots. Feedback and brainstorming on future directions are discussed and noted, but no commitments are made during the meeting until the next iteration planning.
An Informative Workspace is recommended, with easily accessible information on current project status and priorities, including individual commitments by the whole team. Work trends and velocity are made visible as well, and various charts reporting work quality and progress are readily visible in the room. Project-related information that is constantly evolving and requires ongoing improvement by team members can be written and published using Wiki Web technology. A Wiki allows people to edit Web pages using only their browser, as well as to create new pages and hyperlinks between Wiki pages with a set of special WikiWords. Comparisons of various Wiki implementations can be found in [26, 27, 28].
We managed to convince management to apply our proposed method to one of our new product development projects. The product is a knowledge-based visualization application that is fairly novel; we could not find any similar application on the market to model it on directly. Thus, we consider it perfectly suitable for agile and iterative development.
Before commencing development, it is crucial that the build server for continuous integration is installed and running. A bug tracking system should be in place at about the same time as well, to ensure the smooth flow of the rapid review and feedback cycle throughout the iterations. It is also important to have the customer work together with the developers during the conceptualization stage to capture the initial requirements as user stories. The customer attempts to identify features with high business value, while the developers supply input from the technical perspective, performing rough preliminary assessments of technical viability and evaluating the features’ contributions towards the overall architecture.
During iteration planning, we learned that it is extremely useful to keep the customer available throughout the entire session, even when the developers are brainstorming engineering tasks that seem unrelated to the customer and do not actually require the customer’s presence. Because the developers sometimes need to re-estimate a user story or re-adjust its scope in light of a better understanding of its complexity and the time available, it is important to ensure that the customer shares this understanding before development starts. Throughout the iteration, as the user stories are being developed, the customer should ideally remain onsite with the developers, as mentioned earlier, to clarify any queries that might arise. Although it helps to elaborate certain complex user stories with a use case model and supplementary specifications when the customer is unable to dedicate the majority of his time to the developers, we found that a better solution is to nominate a customer proxy (usually the product manager), who represents the actual customer and communicates frequently with the actual customer to gain a thorough understanding (in both breadth and depth) of the software being developed. This also works well for software that has many customers, for instance, end-user software.
We cannot overemphasize the importance of automated unit tests as a safety net against code changes that could break existing code. Aside from having unit tests for every function, method, and class, it is imperative to train the developers to write effective and comprehensive unit tests in order to realize their true value. For instance, in addition to positive unit tests that verify a function works correctly, it is essential to include negative unit tests that try to break a function, for example, by testing for exceptions or errors raised by certain functions or methods. It is very easy to write superficial unit tests (or none at all), especially during crunch hours, which should be avoided altogether. Developers should be constantly reminded that implementing unit tests is as important as user stories and features, and no user story is considered complete without proper unit tests for its implementation. Frequent design improvement through refactoring should also be an important part of the standard programming routine to ensure the effectiveness of agile methods. As we avoid speculative big up-front design before coding (for reasons mentioned in earlier sections), the developers need to constantly look for ways to improve the current software design in light of experience and a better understanding of the software being developed, so that the design always keeps up with the present requirements. Without frequent refactoring, the software risks ending up with a shoddy design full of patchwork that will significantly reduce its maintainability in the long run.
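To make the positive/negative distinction concrete, the hypothetical example below pairs a test that verifies correct output with one that deliberately tries to break the function and asserts that the expected exception is raised:

```python
import unittest

def safe_ratio(numerator, denominator):
    """Hypothetical function under test."""
    if denominator == 0:
        raise ZeroDivisionError("denominator must be non-zero")
    return numerator / denominator

class SafeRatioTest(unittest.TestCase):
    # Positive test: valid input produces the expected result.
    def test_computes_ratio(self):
        self.assertEqual(safe_ratio(10, 4), 2.5)

    # Negative test: invalid input raises the documented error.
    def test_zero_denominator_raises(self):
        with self.assertRaises(ZeroDivisionError):
            safe_ratio(1, 0)
```

A superficial test suite would contain only the first kind; the second kind is what catches the error-handling regressions that crunch-time code tends to introduce.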
The bottom line for creating an environment conducive to software research and development is to recognize the human aspect of the process. In our company, we promote verbal communication among team members over documentation. We attempt to understand and be sensitive to individual needs in order to make each person most productive at what they are doing. Pair programming greatly synergizes the team, but different pairs may have different compatibility, so it is important to switch pairs from time to time. We also found it important to instill team focus on the current iteration of work in order to produce the iteration release on time. Overtime should be avoided; communicate to adjust the scope if necessary. A timely iteration release has a significant psychological effect on the team: it gives the team a great sense of completion at the end of each iteration, and ensures every developer comes in fresh at the beginning of the next.
We have given a broad perspective on agile and iterative development methods and proposed our preferred way, carefully crafted by choosing the best process practices from a thorough understanding of existing methods. Most agile development practices recommend four to ten people in a team; larger teams should be split into a few sub-teams. For estimating and tracking development effort, some suggest using abstract values, such as Story Points or Gummy Bears. Ideal Engineering Days is another way of estimating, based on the number of uninterrupted work days required. Estimation is “an art at best”, and the key is to measure performance and harness this feedback. A number of software tools are available for aiding the planning and management of agile and iterative development. Some XP experts such as Ron Jeffries are strongly against using them, believing they deter human interaction and participation. While we definitely agree with this, we still perceive value in those tools for helping the team better organize and archive project-related information; imagine, for instance, if the story cards had coffee spilled on them, or bug reports were misplaced.
For a successful implementation of our method, it is common to adopt it with pilot projects and a method coach to drive the learning process. A pilot project should be big enough to be significant, but not so big and risky that failure would derail the adoption drive. It is necessary for the developers, managers, and customers to have an honest perception of the method, and to be cautious not to oversell it, but instead to propose the pilot project as an experiment whose results will be evaluated against a set of quantifiable goals to guide further steps. Bear in mind that improvement might not be quickly apparent, as it takes time and skill over a series of projects.
- “History of Computing Hardware”, Wikipedia, Wikimedia Foundation Inc., 2006, http://en.wikipedia.org/wiki/History_of_computing_hardware#1940s:_first_electrical_digital_computers
- I. Sommerville, Software Engineering (6th Edition), Addison Wesley, 2000.
- P. McBreen, “Finding a Better Metaphor than Software Engineering”, Software Craftsmanship: The New Imperative, Addison Wesley, pp. 25-33, 2002.
- W. Royce, “Managing the Development of Large Software Systems”, Proceedings of IEEE WESTCON, IEEE Computer Society Press, pp. 328-338, 1970.
- B. Boehm, “Anchoring the Software Process”, IEEE Software, IEEE Computer Society Press, pp. 73-82, 1996.
- C. Larman, “Plan the Work, Work the Plan”, Agile & Iterative Development: A Manager’s Guide, Pearson Education, pp. 62, 2004.
- G. Booch, Managing the Object-Oriented Project, Addison Wesley, 1996.
- Manifesto for Agile Software Development, 2001, http://www.agilemanifesto.org/
- Agile Alliance, http://www.agilealliance.org/
- K. Schwaber, Agile Project Management with Scrum, Microsoft Press, 2004.
- K. Schwaber & M. Beedle, Agile Software Development with Scrum, Prentice Hall, 2001.
- K. Fisher, Leading Self-Directed Teams, McGraw-Hill, 1999.
- S. Berkun, “How to Make Things Happen”, The Art of Project Management, O’Reilly Media, 2005.
- R. Jeffries, A. Anderson & C. Hendrickson, “User Stories”, Extreme Programming Installed, Addison Wesley, pp. 28-29, 2001.
- A. Cockburn, Writing Effective Use Cases, Addison Wesley Professional, 2000.
- K. Beck, Extreme Programming Explained: Embrace Change, Pearson Education, 2005.
- W. Humphrey, “Why Don’t They Practice What We Preach”, Annals of Software Engineering, Springer, pp. 201-222, 1998.
- C. Larman, “Programming as if People Mattered”, Agile & Iterative Development: A Manager’s Guide, Pearson Education, pp. 30-31, 2004.
- P. Kroll & P. Kruchten, The Rational Unified Process Made Easy, Addison Wesley Professional, 2003.
- P. Kruchten, The Rational Unified Process: An Introduction, Addison Wesley, 2003.
- D. Wells, “CRC Cards”, The Rules and Practices of Extreme Programming, 1999, http://www.extremeprogramming.org/rules/crccards.html
- “List of Unit Test Frameworks”, Wikipedia, Wikimedia Foundation Inc., http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks
- K. Beck & W. Cunningham, “A Laboratory for Teaching Object-Oriented Thinking”, Proceedings on Object-oriented Programming Systems, Languages and Applications, ACM Press, pp. 1-6, 1989.
- D. Rubin, “Introduction to CRC Cards”, Methodologies and Practices – White Paper, SoftStar Research Inc., 1998.
- M. Fowler, K. Beck, J. Brant, W. Opdyke & D. Roberts, Refactoring: Improving the Design of Existing Code, Addison Wesley Professional, 1999.
- “Comparison of Wiki Farms”, Wikipedia, Wikimedia Foundation Inc., http://en.wikipedia.org/wiki/List_of_wiki_farms
- “Wiki Feature Comparison”, WikiMatrix, http://www.wikimatrix.org/
- “Wiki Farms”, http://c2.com/cgi/wiki?WikiFarms
- K. Beck & M. Fowler, “Project Scope and Estimation”, Planning Extreme Programming, Addison Wesley Professional, 2000.
- R. Jeffries, A. Anderson & C. Hendrickson, “How to Estimate Anything”, Extreme Programming Installed, Addison Wesley, pp. 185-188, 2001.
- R. Jeffries, A. Anderson & C. Hendrickson, “Spike Solution”, Extreme Programming Installed, Addison Wesley, pp. 41-44, 2001.
- V. Hartog & D. Doomen, “Coding Standard: C#”, Philips Medical Systems, Philips Electronics NV, 2003.
- C. Larman, “Multiteam or Multisite Early Development”, Agile & Iterative Development: A Manager’s Guide, Pearson Education, pp. 248-249, 2004.
- M. Hohman, “Estimating in Actual Time”, Proceedings of the Agile Development Conference, IEEE Computer Society, pp. 132-138, 2005.
- R. Jeffries, “Some Thought on Planning Tools”, XProgramming.com, 2004, http://www.xprogramming.com/blog/Page.aspx?display=PlanningSoftware