Explore how Agile projects really end—through reviews, releases, and reflection—and why knowing when to stop is just as important as how you start.

If you’re not embarrassed by the first version of your product, you’ve launched too late.
Reid Hoffman, LinkedIn Co-founder
Key terms
Developer testing: Testing done by the person doing the work to catch problems early, before anything is handed over.
User acceptance testing (UAT): Final checks done by real users to confirm whether the product meets their needs and solves the right problem.
Continuous Integration (CI): A process where new code is automatically tested and merged into a shared system several times a day.
Continuous Deployment (CD): A setup that allows tested changes to be automatically released to users without needing a big release day.
Feature flags: Tools that let teams turn new features on or off without releasing new code.
Canary deployments: Releasing changes to a small group first to see how they go before rolling out to everyone.
Sprint review: A session where the team shows stakeholders what was delivered and discusses what should come next.
Sprint retrospective: A team-only reflection to look at how the sprint went and what could be improved for next time.
Technical debt: Extra work caused by shortcuts or incomplete solutions that need to be fixed later.
8.7.1: Testing in Agile – fast, frequent and built in
In traditional project management, testing is often planned at key intervals to control quality and reduce the cost of change.
Agile has the same goal—early detection, fewer defects—but accelerates the cycle. With new features being released every sprint, quality needs to be checked early, often, and in small, manageable pieces.
That’s why testing in Agile isn’t a single process. It’s a layered practice that runs throughout delivery.
Developer testing: The first line of defense
Testing starts with the person doing the work. This could be:
- A developer writing unit tests
- A service designer testing a prototype
- A policy officer checking a draft process with a real user
The goal is simple: catch problems early while they’re still small, local, and easy to fix.
This kind of testing might include:
- Unit tests – Does each individual piece work correctly?
- Peer review – Has someone else looked at it with fresh eyes?
- Test cases or checklists – Have all the key conditions been met?
Agile testing may be lighter on paperwork, but not on standards. Developer testing becomes part of the work itself rather than a final step.
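To make this concrete, here is a minimal sketch of what developer testing looks like in code. The function and its discount rules are hypothetical, invented purely for illustration:

```python
# A small function and the unit tests that check it. Each test verifies
# one small, local behavior -- the kind of check a developer runs before
# anything is handed over.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to 2 decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0    # normal case
    assert apply_discount(100.0, 100) == 0.0    # edge case: full discount
    try:
        apply_discount(100.0, 150)              # invalid input
        assert False, "expected ValueError"
    except ValueError:
        pass                                    # correctly rejected

test_apply_discount()
```

Because each test targets one behavior, a failure points straight at the piece that broke, while the problem is still small, local, and easy to fix.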
User testing: Does it work for them?
The second layer is User Acceptance Testing (UAT). This usually happens toward the end of a sprint or just before release.
Real users—or people who represent them—interact with what’s been delivered and confirm whether it meets their needs. UAT helps answer questions like:
- Is this what we asked for?
- Can we actually use it?
- Does it create any new issues we didn’t expect?
In Agile, you don’t wait until the whole project is done to get user feedback. You test small parts with users early, so you can adjust while there’s still time.
UAT is also a great source of new or refined backlog items. When users test features, they often spot needs that hadn’t been considered yet or provide clearer input for future development.
For example, after testing a new data entry screen, users might say, “It works—but can we have an auto-save feature?” That might not block release, but it’s a valuable enhancement that goes straight into the backlog for prioritization.
UAT helps ensure that what’s being delivered is useful, usable, and actually solves the right problem, while keeping future improvements flowing.
Why it matters
Skipping or rushing testing is one of the fastest ways to build up technical debt—hidden issues or unfinished work that cost more to fix later. Testing early and often helps prevent that and reduces rework down the line.
It also supports faster delivery. When you trust the quality of your work, you don’t need to pause or backtrack. Every sprint becomes a usable release point.
Agile teams don’t leave testing to the end—because they don’t leave anything to the end. Every sprint is a chance to deliver something real, and that only works if it’s tested, trusted, and ready to go.
But even the best testing can’t catch everything. Some issues only show up when the work goes live, is seen at scale, or meets real-world use. That’s why how you release your work matters just as much as the testing behind it.
In the next lesson, we’ll explore how Agile teams release their work, and how tools like CI/CD, feature flags, and canary deployments help reduce risk, maintain quality, and keep momentum.
8.7.2: How Agile teams release
Even with great testing, things can still go wrong when new features are released into the real world. Live systems behave differently. Edge cases emerge. Real users sometimes try quite bizarre things that no one thought of. That’s why how you release is just as important as what you release.
Agile teams use a few key practices to manage that process:
CI/CD (Continuous Integration / Continuous Deployment)
This is the backbone of modern Agile delivery. It’s a set of tools and workflows that help teams build, test, and release software quickly and reliably.
- Continuous Integration (CI) means that completed code is regularly merged into a shared system, sometimes multiple times a day. Each change is automatically tested to make sure it doesn’t break anything.
- Continuous Deployment (CD) means changes that pass testing can go live automatically (or with minimal effort), without waiting for a big release day.
CI/CD is especially useful in Kanban or continuous delivery environments, where teams don’t work in sprints and may release features whenever they’re ready.
But it’s also used by Scrum teams, where it helps automate testing and deployment at the end of each sprint, reducing risk and manual work.
Think of CI/CD as a conveyor belt—once a feature passes all its checks, it’s ready to ship.
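That conveyor-belt idea can be sketched in a few lines of Python. This is a toy model, not a real pipeline tool—the check names are hypothetical—but it captures the core rule: a change only moves to deployment if every automated check passes.

```python
# Toy model of a CI/CD gate: CI runs all checks, CD deploys only on success.

def run_checks(change: dict) -> bool:
    """CI: run every automated check recorded for this change."""
    checks = [change["unit_tests_pass"], change["lint_clean"], change["builds"]]
    return all(checks)

def pipeline(change: dict) -> str:
    """CD: deploy automatically when CI passes, otherwise reject."""
    return "deployed" if run_checks(change) else "rejected"

good = {"unit_tests_pass": True, "lint_clean": True, "builds": True}
bad = {"unit_tests_pass": True, "lint_clean": False, "builds": True}

print(pipeline(good))  # deployed
print(pipeline(bad))   # rejected
```

In a real pipeline the checks are actual test suites, linters, and builds run by a CI service, but the gating logic is the same: no green checks, no release.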
Feature flags
Feature flags (sometimes called toggles or switches) let teams hide or show new functionality without needing a separate release.
For example:
- You release a new feature to production, but it’s turned off.
- You toggle it on for testers or a small group of users.
- If something goes wrong, you turn it off instantly—no rollback needed.
This makes it easier to test real features in real environments without risking your entire platform. It also supports gradual rollout, A/B testing, and fast recovery.
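The mechanics are simpler than they sound. Here is a minimal sketch of a feature flag in Python—the flag name and user IDs are made up for illustration:

```python
# A feature ships "dark": the code is in production, but the flag decides
# who can see it. Flipping the flag needs no new release.

FLAGS = {
    "new_checkout": {"enabled": False, "allowed_users": {"tester-1"}},
}

def is_enabled(flag: str, user: str) -> bool:
    """A feature is visible if it's globally on, or the user is allow-listed."""
    config = FLAGS.get(flag)
    if config is None:
        return False
    return config["enabled"] or user in config["allowed_users"]

# Released but switched off for everyone except testers:
assert is_enabled("new_checkout", "tester-1")
assert not is_enabled("new_checkout", "alice")

# Roll out to all users -- no redeploy, just flip the flag:
FLAGS["new_checkout"]["enabled"] = True
assert is_enabled("new_checkout", "alice")
```

If something goes wrong after the rollout, setting `enabled` back to `False` hides the feature instantly, with no rollback of code.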
Canary deployments
A canary deployment is when you release a new version to a small group of users first, just as coal miners once took canaries underground to warn them of toxic air.
If the canary group experiences issues, you pause or roll back. If everything’s smooth, you expand the release to everyone else.
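The routing behind this can be sketched simply. The 5% threshold and the hashing scheme below are illustrative choices, not a standard:

```python
# Canary routing: deterministically bucket each user into 0-99, and send
# the lowest few buckets to the new version. The same user always lands
# in the same bucket, so their experience is stable across visits.
import hashlib

def serve_new_version(user_id: str, canary_percent: int = 5) -> bool:
    """Route roughly canary_percent of users to the new version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < canary_percent

users = [f"user-{i}" for i in range(1000)]
canary_count = sum(serve_new_version(u) for u in users)
print(f"{canary_count} of {len(users)} users are on the new version")
```

Expanding the release is then just a matter of raising `canary_percent`; pausing or rolling back means dropping it to zero.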
It’s a smart way to:
- Catch bugs that only show up at scale
- Measure performance impacts before full rollout
- Reduce the blast radius if something fails
Combined with feature flags and CI/CD, canary deployments help teams ship confidently, without needing a full safety net.
In the next lesson, we’ll explore what happens after the release: sprint reviews, retrospectives, and the conversations that keep Agile teams improving every step of the way.
8.7.3: Sprint reviews and retrospectives
Two of the most effective ways Agile teams learn are the sprint review and the retrospective.
Both happen at the end of every sprint. But they’re not the same thing, and they’re definitely not optional extras. Done well, they keep your team aligned, focused, and continuously improving.
The sprint review is where the team shares what they’ve built with stakeholders and gets feedback. In a good sprint review:
- The team walks through what’s been delivered
- Stakeholders ask questions or suggest improvements
- Together, everyone reflects on what should go into the backlog next
It’s a key checkpoint to ask:
- Is this what we need?
- Are we still heading in the right direction?
- What should we prioritize now?
Unlike a handover meeting in traditional projects, the sprint review isn’t about sign-off. It’s about engaging stakeholders and using their feedback to shape what comes next.
The sprint retrospective, on the other hand, is only for the team. It’s a short, honest reflection on how the sprint went—not the work itself, but how the team worked together.
You might cover:
- What went well?
- What didn’t?
- What do we want to try differently next time?
This is where real improvement happens. Maybe stand-ups have been dragging. Maybe the team is taking on too much. Maybe handovers between roles are slowing things down. The retrospective is your chance to surface these issues and commit to small, realistic changes.
Importantly, retrospectives are not blame sessions. As with all the best team meetings, they’re built on psychological safety. Everyone should feel free to speak up, without fear of judgment. That’s how trust grows, and how good teams get even better.
Why this matters
Unfortunately, reviews and retrospectives are often skipped when things get busy. Ironically, though, that’s when they’re most needed.
If you’re always delivering but never reflecting, your team risks burning out, drifting off track, or repeating the same mistakes. These two rituals create space to:
- Test assumptions with users
- Adjust plans based on real feedback
- Strengthen collaboration and morale
As such, they are valuable exercises to undertake in any project methodology, whether Agile, predictive or hybrid.
In the next lesson, we’ll take a look back at the Agile lifecycle, and why “closing” in Agile often looks more like evolving.
8.7.4: When Agile projects actually end
At a glance, Agile projects never seem to end.

If you look at the Agile lifecycle, it’s easy to see why. You start with a vision, turn that into user stories, build a backlog, sprint, review, reflect, and do it all again.
The cycle repeats. Feedback goes in, improvements come out. Over and over.
None of these iterations requires a final “go-live.” In fact, many Agile projects release continuously from Day 1. That’s the point. The outcome isn’t an event—it’s a working solution, released, refined, and released again.
So how do Agile projects end?
In traditional project management, a project ends successfully when the defined scope is delivered and the client formally accepts the outcomes.
Many Agile projects conclude in a similar way, especially when the product is stable and functioning as intended. In these cases, the work transitions to business-as-usual (BAU) operations. The Agile team hands off responsibility to a support or service delivery team that manages the system going forward.
For example, a government department builds a new permit application platform using Agile. Once the core functionality is live and users are regularly interacting with it, the delivery team hands it over to the internal IT service desk for ongoing support and maintenance.
That said, Agile projects can end for other reasons, too. Because they’re iterative and value-driven, Agile projects are often wound down when:
Time runs out
For example, the team was funded for a 12-month development cycle, and now the 12 months are up.
The money runs out
The budget has been used, and no further investment is planned.
The value runs out
The original problem has been solved, and building more features would deliver little additional benefit—think of the law of diminishing returns.
The market changes
User needs shift, priorities change, or a competitor releases a better solution. Sometimes, too, the project no longer aligns with the organization’s strategic direction.
But not all Agile projects end with a bang.
Sometimes, they just… drift. There’s no final sprint. No handover. No announcement. The team keeps going through the motions, but it’s not clear who’s still using the product, what problem they’re solving, or whether anything meaningful is being delivered.
When a project lacks a clear endpoint, trouble follows:
Teams lose purpose
Without a defined goal or finish line, work becomes routine. Progress stalls, and motivation dips.
Stakeholders drift away
Without clear outcomes or updates, they lose interest or stop believing the work is worth supporting.
Technical debt builds up
Code and systems evolve without strategy. Bugs get patched but not fixed. No one stops to ask whether the product still makes sense.
Costs creep up
Without a formal close-out, budgets get blurry and accountability fades.
It takes courage to end a project, especially one that hasn’t lived up to expectations. But letting a project quietly fade out isn’t just indecisive—it’s expensive.
We’ll look at this in more depth in Unit 12, where we explore how to end projects that didn’t go to plan.