Testing in traditional projects differs from testing in agile projects that follow the well-known agile practices. The main difference is when the testing activities start. In traditional projects, testing begins after the software is developed, as a final activity. Some companies still follow this approach, but it means issues are found late in development, when they are expensive to fix; testers never get the chance to “play” with the system at an early stage, and the high cost of late fixes can affect the overall deadline for the product release.
In agile projects, testing starts at the earliest stage: requirements gathering. Testing is done in parallel with development, with testers discussing requirements, defining the tools for testing, identifying the test strategy to follow, creating test cases and automated scripts, and evaluating test results. All of these activities can overlap with the work of other members of the team, so bugs are identified and fixed early and a successful product increment is shipped to the customer. This is what is called agile testing.
Since testing is performed at various stages, the common agile practices in which testers are an integral part are:
- User Story Creation
- Release Planning
- Sprint Planning
- Continuous Integration
User Story Creation
A user story is an agile practice: a form of requirement specification written collaboratively by all the members of the team (testers, developers, business analysts, etc.). When new functionality is introduced, a user story for that functionality is defined, discussed, and analyzed. To make sure the story is correctly explained and understood by everyone, several review meetings are arranged to discuss its validity: how well the feature is defined, what the preconditions are, the team’s skills, external factors that can affect the story, and the resources available to work on it.
Testers are a valuable part of creating the user story. From their testing mindset they can ask questions, clarify requirements, propose ways of testing and automating, and prioritize what to test first. Testers can also identify missing details the rest of the team overlooked, as well as any non-functional aspects of the story.
Even though this is a collaborative effort, the testers are responsible for writing, reviewing, and maintaining the acceptance criteria for the user story, in a BDD format understandable by all members of the team. The acceptance criteria are the conditions the team must satisfy for the story to be considered finished. Testers must create all the testing tasks related to the story and work on them until every task is closed; only then, with no outstanding tasks left, can the story itself be closed.
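As a sketch of what such an acceptance criterion can look like once automated, here is a plain pytest test structured in the Given/When/Then style of BDD. The `ShoppingCart` class and its behavior are hypothetical, purely for illustration:

```python
# Hypothetical system under test, defined inline so the sketch is
# self-contained. A real project would import the production class.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total_reflects_added_items():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    # Then the cart total is the sum of both prices
    assert cart.total() == 14.00
```

Because the Given/When/Then steps mirror the wording of the story, non-technical team members can read the test and confirm it matches the agreed criteria.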
Release Planning
Another agile practice is release planning: a meeting in which the team plans the release of the product a couple of months ahead. This is when the product backlog is created, with all the prioritized stories. It is an initial meeting that can span many sprint planning meetings as development of the software proceeds.
In this meeting, testers can discuss the user stories initially and set the necessary acceptance criteria. They can also estimate the effort needed to test the stories, identify all the test types that need to be performed, and provide an initial estimation of risks that can affect the later development and testing stages.
Sprint Planning
Sprint planning is an agile practice typically held every two weeks, once the release planning meeting is over. Each team can run any number of sprints depending on the overall release deadline. From the user stories already prioritized by the product owner, the team selects the ones to work on in the upcoming sprint.
Testers can add value to the team participating in this meeting. They can:
- Divide the user stories into tasks
- Estimate the effort for the tasks
- Plan for automation
- Create acceptance tests for the user stories
- Define detailed risk estimation
Testers must understand every potential change that can affect the current sprint. They need a good set of test cases or automation scripts covering all the functionalities, both positive and negative, so that they can reduce any potential regression risks. Since the test scope can be affected, testers must clarify which areas are affected, who will do the re-testing, on what environment the testing will be performed, and whether there are any risks that can affect the product release.
Continuous Integration Process
This agile practice is a must if the organization wants consistent releases and consistent quality of each product increment. Since a product must be developed, reviewed, tested, built, and deployed to the production environment, a pipeline executing all of the team’s activities should run at least once a day. The flow of work for an agile team using an automated pipeline covering development, testing, build, and deployment should consist of the following activities:
- An automation pipeline is created, consisting of all the steps needed for a feature to be successfully shipped to production
- Developers push the code which is statically tested and the unit tests are executed
- Functional tests created by the testers are run afterward
- Deployment of the feature takes place after the integration tests
- Report from the build is generated
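The steps above can be sketched as a sequential stage runner. The stage names and the `run_pipeline` helper below are hypothetical; in practice a CI server such as Jenkins orchestrates these steps:

```python
def run_pipeline(stages):
    """Run each (name, stage) pair in order; stop at the first failure."""
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:
            break  # a failed stage blocks everything after it, including deployment
    return results


# Each stage returns True on success; the bodies here are placeholders
# for the real static analysis, test runs, deployment, and reporting.
stages = [
    ("static analysis + unit tests", lambda: True),
    ("functional tests", lambda: True),
    ("integration tests", lambda: True),
    ("deploy to production", lambda: True),
    ("generate build report", lambda: True),
]

for name, ok in run_pipeline(stages):
    print(f"{name}: {'passed' if ok else 'failed'}")
```

The key design point is the early exit: a red functional or integration stage stops the pipeline before anything reaches production.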
We’re focusing on the third step: the functional tests created by the testers. Agile development strongly encourages testers to have automation knowledge, whether at the API or the UI level. Testers therefore create and execute these tests as part of the continuous integration process, sequentially with all the other activities. They need to cover as many test cases as possible to make sure no edge case is accidentally missed.
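One common way to cover edge cases without duplicating test code is a parametrized test. This sketch uses pytest’s `parametrize` marker against a hypothetical `validate_username` function (the rule itself, 3–20 alphanumeric characters, is invented for illustration):

```python
import pytest


def validate_username(name):
    """Hypothetical rule: accept 3-20 character alphanumeric names."""
    return 3 <= len(name) <= 20 and name.isalnum()


@pytest.mark.parametrize("name,expected", [
    ("bob", True),      # shortest valid name
    ("ab", False),      # too short: edge case just below the limit
    ("a" * 20, True),   # longest valid name
    ("a" * 21, False),  # too long: edge case just above the limit
    ("bob!", False),    # invalid character
])
def test_validate_username(name, expected):
    assert validate_username(name) is expected
```

Each boundary of the rule gets a case just inside and just outside the limit, which is exactly where missed edge cases tend to hide.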
If testers have what is called a “good automated regression test suite”, they can be confident that any change request from the customer that affects the system in any way will be found quickly and effectively resolved. They can also focus on exploratory testing of upcoming features, while the already existing features are covered by automation that every member of the team can run at any time.
This testing activity is very important because it:
- Provides quick feedback to the developers about their code quality
- Detects issues quickly after every new build
- Avoids repeating the same manual tests over and over again
- Reduces potential regression risks
Retrospective Meeting
Last but not least, an agile practice that testers need to participate in and add value to is the retrospective meeting. It is held at the end of each sprint, and its goal is captured in three simple questions:
- What was successful in the last sprint?
- What can be improved?
- What are the action points for improvement?
This meeting can affect the testing activities through questions like:
- Is the quality of the test cases good enough?
- Are the tools sufficient for testing?
- Is the test effectiveness at a good level?
- Is the testing team satisfied with the tasks at hand?
All of these testing topics can come up in a retrospective meeting. Testers should be part of the meeting and discuss everything they did well and every issue they faced, whether technical, related to client communication, or a matter of relationships within the team. If they successfully address these questions, the team can learn from its mistakes and achievements and work towards a successful next sprint.