- Estimation
- Workflow
When the Numbers Don't Matter: Why Teams Miss Deadlines Despite Perfect Estimates
TL;DR
Agile estimation challenges are rarely about the number. Planning poker is useful when teams treat vote spreads as signals about scope, risk, and dependencies. Software team alignment during estimation improves sprint predictability more than chasing velocity. Velocity alone cannot forecast capacity because context changes across sprints. Coordination work must live in Jira as first-class items, or retro actions will not get done. Easy Agile TeamRhythm keeps planning and estimation in one place: story mapping for sequence, planning poker on the issue for shared context, and retrospective actions that turn into tracked work. The new ebook, Guide to Building Predictable Delivery with Jira in 2026, explains how to plan with clarity, align on effort, and turn problems into progress. The outcome is fewer rollovers, clearer handoffs, and more reliable delivery.
---
Estimation in software teams has become a performance ritual. Planning poker sessions run smoothly, story points get assigned, and even velocity charts trend upward.
Yet research analysing 82 studies found five recurring reasons why estimates fail: information quality, team dynamics, estimation practices, project management, and business influences.
The point is, the problem with estimation runs deeper than accuracy - it's about what teams miss whilst focusing on the number.
When a team estimates a story and three people say it's a 3 whilst two say it's an 8, that spread contains more value than whatever final number they settle on. The disagreement signals different assumptions about scope, reveals hidden dependencies, exposes risks before code gets written. But most teams treat the spread as a problem to resolve rather than intelligence to extract. They discuss just long enough to reach consensus, record the number, move on.
The estimation ritual runs perfectly, while the coordination that estimation should enable never happens.
Why the number misses the point
Communication, team expertise, and composition decide how reliable an estimation will be, far more than the technique itself.
A team of senior engineers who've worked together for years will generate different estimates than a newly formed team with mixed experience, even looking at identical work. Neither set is wrong - they reflect different realities.
Problems emerge when organisations ignore this context and treat estimates as objective measurements.
Story points get summed across teams. Velocity gets compared across squads. Estimates meant to help one group coordinate become data points in dashboards, stripped of the shared understanding that gave them meaning.
What planning poker actually reveals
Planning poker works when teams use it to promote collaboration, uncover risks, and address uncertainties proactively. Those benefits vanish when teams rush past disagreement to reach a number.
Someone who estimates low might:
- Know a shortcut from previous work
 - Have solved something similar before
 - Understand a technical approach others haven't considered
 
Someone who estimates high might:
- Have spotted an integration challenge
 - Questioned an assumption in the requirements
 - Remembered something that broke last time
 - Identified a missing dependency
 
Both pieces of knowledge matter more than splitting the difference.
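One way to operationalise this is to treat the vote spread itself as the trigger for conversation. Below is a hypothetical sketch (not from any particular tool) that flags planning poker rounds where votes are far enough apart on a Fibonacci-style scale to warrant a discussion of assumptions:

```python
# Hypothetical sketch: flag planning poker rounds whose vote spread is wide
# enough to warrant discussing assumptions before settling on a number.
# Assumes a Fibonacci-style point scale; the threshold is a team choice.

SCALE = [1, 2, 3, 5, 8, 13, 21]

def spread_steps(votes):
    """Distance, in scale steps, between the lowest and highest vote."""
    return SCALE.index(max(votes)) - SCALE.index(min(votes))

def needs_discussion(votes, threshold=1):
    """True when votes span more than `threshold` steps on the scale."""
    return spread_steps(votes) > threshold

# The scenario from the text: three people vote 3, two vote 8.
print(needs_discussion([3, 3, 3, 8, 8]))  # True - two steps apart (3 -> 5 -> 8)
print(needs_discussion([3, 5, 5, 5]))     # False - adjacent scale values
```

The point of a rule like this isn't automation for its own sake; it makes "wide spread means we talk" an explicit team agreement rather than something the facilitator remembers to do.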
Teams that skip this interrogation lose their only reliable way to discover what they don't yet know. Later, when work takes longer than estimated, they blame the estimation technique. They should blame the coordination failure that happened during estimation.
How team members add context
Research indicates that even team members whose skills aren't needed for an item contribute valuable questions during planning poker.
The database person asks about data volume. The designer notices a missing edge case. The platform maintainer flags a version incompatibility.
None of them are trying to make the estimate bigger to protect themselves. They're raising legitimate technical considerations that affect how much work is actually involved. These questions reveal real complexity the team needs to account for. That's different from someone saying "Let's call it an 8 instead of a 5, just to be safe" without any specific reason - that's padding, and teams should avoid it at all costs.
When estimation becomes a solo activity or a quick management decision, all that context disappears. Work looks simpler than it is, not because anyone lied, but because key voices never got heard.
Why team coordination matters more at scale
Coordination ranks as one of the biggest challenges in large-scale software projects, particularly with complex codebases and multiple teams working simultaneously. As organisations scale, coordination problems increase:
- Dependencies multiply across teams. What used to be a conversation between two people in the same room now requires checking another team's roadmap, finding out who owns a component, and waiting for their sprint to align with yours.
 - Handoffs increase between specialists. A feature that one full-stack developer could build now needs a frontend specialist, a backend specialist, a platform engineer, and a data analyst - each working on different schedules with different priorities.
 - The margin for error shrinks. When you're coordinating work across five teams instead of one, a single miscommunication or missed dependency can block multiple teams at once, not just delay one work item.
 - Assumptions travel further from their source. The product manager who spoke to the customer isn't in the room when the backend developer makes a technical decision. The context that shaped the original requirement gets lost through layers of handoffs and documentation.
 
Estimation sessions are often the only moment when everyone involved in delivering work actually talks about it before starting. Treating that time as a box-checking exercise to produce velocity data is one of the biggest mistakes a team can make.
When estimates protect plans instead of revealing risks
Let's look at a familiar scenario. A team commits to eight features based on historical velocity. Three sprints in, two features ship and the rest need another month. Marketing campaigns get delayed. Customer pilots get postponed. Executive presentations need rewriting.
The team "missed commitment," so stakeholders lose confidence.
Next quarter, product managers add extra weeks to every timeline just to be safe. Engineering leaders give longer forecasts they know they can beat rather than their honest best estimate. Everyone protects themselves, the system slows down, trust erodes further.
Now, step back - why did the team commit to eight features in the first place? Usually because velocity suggested they could complete them.
The velocity number reflected what they'd delivered in the past. What it didn't reflect was whether those sprints had taught them anything, whether dependencies were worsening, whether technical debt was compounding, or whether the work ahead resembled the work behind.
Why velocity numbers mislead
Velocity treats delivery capacity as stable when it's dynamic.
Teams get faster as they remove friction and slower as complexity increases. Last sprint's number is already outdated, yet sprint planning sessions treat it as a reliable forecast.
Consider what velocity cannot tell you:
- Whether the team learned from previous mistakes
 - Whether technical debt is slowing new work
 - Whether dependencies have become more complex
 - Whether the upcoming work requires unfamiliar technology
 - Whether key team members will be available
 
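A tiny illustration, with made-up numbers, of why the average hides all of this: a three-sprint rolling average can report exactly the same velocity for a stable team and a steadily declining one.

```python
# Illustrative sketch (invented numbers): a three-sprint rolling average
# reports the same "velocity" for a stable team and a declining one,
# which is why the number alone can't answer the questions above.

def rolling_avg(history, window=3):
    """Average of the most recent `window` sprint velocities."""
    recent = history[-window:]
    return sum(recent) / len(recent)

stable    = [30, 31, 29, 30, 30, 30]
declining = [40, 36, 33, 32, 30, 28]

print(rolling_avg(stable))     # 30.0
print(rolling_avg(declining))  # 30.0 - same average, very different trajectory
```

Two teams, one number, opposite futures. The forecast only looks reliable because the chart can't see the trend or the context behind it.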
Better agile estimation techniques won't fix coordination problems. But teams can fix coordination problems if they treat coordination itself as work that deserves attention.
Why fixing coordination problems matters for estimation
Teams that have messy estimation sessions usually know exactly what's wrong.
Someone will say it in standup: "We should really refine stories before planning."
Another person mentions it in Slack: "We need to check dependencies earlier."
It even comes up in the retrospective: "Our estimates are all over the place because we don't understand the work."
The insights are there. What's missing is follow-through.
Look at your last three retrospectives. How many action items got completed before the next one?
The average completion rate for retro action items sits around 0.33% - teams basically complete almost none of them. And that's mainly because the improvement actions exist outside the system that schedules work.
During retrospectives, teams identify real problems:
- "Our estimation sessions run too long because we don't refine stories beforehand"
 - "We keep getting surprised by infrastructure work that's not in our backlog"
 - "We never follow up on the spikes we create"
 - "Dependencies block us mid-sprint because we didn't check capacity with other teams"
 
All true. All important. Then the retrospective ends, everyone returns to their sprint board, and those insights have nowhere to go.
The backlog contains features. The sprint contains stories. And improvement work lives in meeting notes, if anywhere. So it doesn't compete for capacity, doesn't get assigned an owner, doesn't get tracked, and doesn't get done.
How to make improvement work visible
Usage data from Easy Agile TeamRhythm showed teams completing only 40-50% of their retrospective action items. After releasing features to surface and track incomplete actions, completion rates jumped to 65%.
The mechanism was simple:
- Convert retrospective insights into Jira work items
 - Give each one an owner
 - Slot them into upcoming sprints like any other work
 - Make incomplete action items from the previous retros more visible
 
When teams actually complete their retrospective actions, estimation gets better without changing the estimation technique. Stories get refined before planning, so there's less guesswork. Dependencies get checked earlier, so there are fewer surprises. The team builds shared understanding over time, so the spreads in planning poker get narrower and the discussions get shorter.
This is a ripple effect: coordination problems compound when ignored, but they also improve when treated as real work.
A team that never fixes its planning problems will have messy estimation sessions next quarter, and the quarter after that. A team that deliberately improves coordination will gradually need less time to reach better software team alignment and their estimates will become more reliable as a byproduct.
What this means for tools and practice
Most planning tools were built to manage tasks, not support the conversations that help teams coordinate around those tasks. The challenge now is making coordination visible and trackable in the same place where delivery work happens.
Easy Agile TeamRhythm keeps coordination work alongside delivery work.
✅ User Story Map shows how work connects to goals, making sequencing a shared discussion instead of someone's private spreadsheet.
✅ Planning poker happens inside the Jira issue, where everyone sees the same acceptance criteria and attachments - so the spread of estimates becomes a trigger for conversation, not a number to agree on.
✅ Retrospective actions convert directly into backlog items that get added into upcoming sprints, so improvement work gets the attention it deserves.
The stakes for getting this right are rising. Atlassian's push towards AI assistance means your planning documents will be read by more people and processed by more systems. When Rovo searches your Jira instance, it surfaces whatever you've written - clear goals and explicit dependencies, or vague titles and hidden assumptions.
A fuzzy sprint goal doesn't just confuse your team. It confuses the dozen people across three time zones reading your board asynchronously, the automation trying to identify relevant work, the AI assistant trying to generate status updates, and the stakeholders trying to understand progress.
Where this leads
The teams that succeed over the next few years won't be the ones with the most sophisticated agile estimation techniques or the highest velocity scores.
They'll be teams where planning poker uncovers risks instead of producing commitments. Where retrospectives generate work items instead of wishful thinking. Where coordination shows up in Jira instead of getting discussed in conversations that disappear.
They'll be teams that figured out estimation was never about the numbers. It was always about getting everyone pointed in the same direction, with their eyes open, before the work starts.
The number is just what gets written down. The conversation that takes the team to the number is where the real work lies.
 
- Workflow
Why Team Planning Feels Harder Than It Should (and What To Do About It)
TL;DR
Sprint planning feels harder because distributed/async work, tool consolidation, and Atlassian AI expose priority noise, software estimation problems, and hidden dependencies. This guide shares three team planning best practices - set a clear order of work, estimate together to surface risk, and close the loop with action-driven retrospectives - so team alignment and team collaboration improve and plans hold up.
---
Sprint planning should not feel like a shouting match where the loudest voice wins. Yet for many teams it has become a long meeting that drains energy, creates a weak plan, and falls apart by Wednesday.
The problem is not that your team forgot how to plan. The world around the plan changed. People work across time zones. More decisions happen in comments and tickets than in rooms. And the plan lives in tools that other people (and AI‑powered search) will read later. When the audience shifts from "who was in the meeting" to "everyone who touches the work", the plan must be clearer and better organised.
AI and asynchronous collaboration are on the rise, and research links strong delivery capabilities to better business performance. Those two lines meet in planning: if inputs are unclear, you simply do the wrong things faster.
We believe planning feels harder because three foundational elements have broken down:
- How priorities turn into an order of work,
 - How estimates show risk (not just numbers), and
 - How suggestions for improvements turn into real work.
 
When these are weak, planning becomes an exercise in managing mental overload rather than building a clear path forward.
This piece examines why sprint planning has become so difficult, what changed in 2025 to make it worse, and most importantly, how teams can fix those three foundations so your plan still makes sense when someone who missed the meeting reads it two days later.
The mental load of modern planning
"Cognitive load" is simply the amount of mental effort a person can handle at once. Every team has a limit on how much information they can hold at any given moment, and planning meetings now push far past it.
At the same time, teams are asked to:
- Rank features against unclear goals,
 - Estimate work they have not fully explored,
 - Line up with platform teams whose time is uncertain,
 - Balance different stakeholder requests,
 - Figure out dependencies across systems they do not control, and
 - Promise dates that others will track closely.
 
Plans are often made as if everyone is fully available and not also working on existing projects and tasks. As we all know, that is never the case. When the mental load is too high, teams cannot safely own or change the software they're working on.
Planning then becomes a bottleneck where teams spend more time managing the complexity of coordination than actually coordinating. Decisions slip, assumptions go unchecked, and the plan that comes out is not a shared understanding but a weak compromise that satisfies no one.
Where priorities fail
In most planning sessions, "priority" has lost its meaning. Everything is P0. Everything is urgent. Everything needs to happen this sprint. If everything is top priority, nothing is.
Teams struggle to prioritise because there are too many moving parts at once: lots of stakeholders, long backlogs, and priorities that change week to week. Most prioritisation methods - MoSCoW, WSJF, RICE - work for a small list and a small group, but they stop working when the list and the audience get big. Meetings get longer, scores become inconsistent, and re-ranking everything every time something changes isn't practical. People also learn to "game" the numbers to push their item up.
There’s a second problem: these methods assume everyone agrees on what “value” means. In reality, sales, compliance, platform, design, and support often want different things. Numeric models (like simple scoring) miss those differences, because some trade-offs (like brand risk, customer trust, regulatory deadlines) are easier to discuss than to put into a single number.
A flat product backlog makes this worse. As Jeff Patton says, reading a backlog as one long list is like trying to understand a book by reading a list of sentences in random order - all the content is there, but the story is gone. Without the story, “priority” becomes a label people use to win arguments rather than a clear order of work.
Put simply, when the work and the number of voices grow, the usual techniques can’t keep up. If you don’t have a way to surface different perspectives and settle the trade-offs, decisions drift from strategy, plans keep shifting because they were never tied to outcomes, and engineers stop seeing why their work matters.
The estimation show
Teams are generally too optimistic about effort and too confident in their estimation numbers. On average, work takes about 30% longer than expected. Even when people say they’re 90% sure a range will cover the actual effort, it only does so 60–70% of the time.
Translation: confidence feels good, but it does not mean accuracy.
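One lightweight countermeasure is a simple reference-class adjustment: scale new estimates by the team's own historical actual-to-estimate ratio. The sketch below uses invented numbers; the ~1.3x result happens to echo the roughly 30% average overrun cited above, but a real team should derive the ratio from its own completed work.

```python
# Hedged sketch of a reference-class adjustment: scale new estimates by the
# team's historical actual-to-estimate ratio. All figures are invented.

def overrun_ratio(history):
    """history: (estimated, actual) effort pairs from completed work."""
    return sum(actual for _, actual in history) / sum(est for est, _ in history)

past = [(5, 7), (3, 4), (8, 10)]   # made-up example data
ratio = overrun_ratio(past)        # 21 / 16 = 1.3125

def adjusted(estimate):
    """Raw estimate corrected by the observed overrun ratio."""
    return estimate * ratio

print(round(adjusted(8), 1))  # a raw 8 looks more like 10.5
```

The correction doesn't make anyone a better estimator; it just stops the plan from inheriting the team's systematic optimism.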
The deeper issue is how we estimate. It often becomes a solo guess instead of a shared check on risk.
Here's what actually happens - someone sees a work item called "Update API", gives it 5 points based on the title alone, and the team moves on. No one tests the assumptions behind the number.
Nobody talks about the auth layer changes implied by "update." Nobody brings up the database migration hiding in plain sight. Nobody checks whether the frontend team knows this change is coming.
And when those show up mid-sprint, the plan slips and trust drops.
After a few misses, behaviour shifts for the worse. People start padding their estimates - quietly rounding numbers up to feel safe. Product then starts pushing back. And estimation turns into negotiation, not learning.
A healthier signal to watch is the spread of estimates. A wide spread isn't a problem to smooth over; it's a prompt to discuss assumptions. Initial estimates will almost always differ, which gives each team member the chance to explain why theirs came in higher or lower than the others.
The coordination cost of dependencies
When different teams own connected parts of the system, one team’s work often can’t move until another team makes a change. If those teams aren’t lined up, the change gets stuck.
This is common when how the software is wired doesn’t match how the teams are organised.
For example, the code says “Service A must change before Service B,” but A and B live in different teams with different priorities, sprint dates, and intake rules. The code requires coordination, but the org chart doesn’t provide it.
In large organisations, these small, work item‑level links are a constant source of delay. And it has only gotten worse in recent times.
Platform engineering has grown, with most features now touching shared services - auth, data platforms, CI/CD, component libraries. But planning often happens without the platform team in the room. No one checks their capacity, no one tests the order of work, and there's no agreement on intake windows or what to do when something is blocked.
So the plan looks ready on paper. Stories are sized. The sprint is committed. Then, three days in, someone says at stand‑up: “That API won’t be ready until next Tuesday,” or “The platform team is tied up,” or “Friday’s deployment window isn’t available.” Work waits, people wait, and money burns while nothing moves forward.
Dependencies increase complexity in any system. More complexity means more idle time as work passes between people or teams. Developers, then, often end up waiting for other work to wrap up before they can begin, creating inefficiencies that add no value but still cost payroll and budget.
What changed in 2025
Three shifts in 2025 made planning even harder:
1. Distributed work reduced live coordination.
Hybrid days replaced being in the same room every day. More work happens asynchronously. That means your written plans and notes - sprint goals, story maps, dependency notes - must explain what real-time conversations used to cover: why this work matters, what comes first, what won't fit, who's involved, and what could block you. Vague goals that could be fixed in person now fall apart across time zones.
2. Fewer tools, one system.
Teams cut vendors to reduce spend. Planning, estimation, and retrospectives moved into one solution, whether they fit well or not. While that reduces context-switching, it also means teams lose specialised tools and custom workflows. And because stakeholders can now see the full line from strategy to stories in one place, your sprint goals, estimation notes, and improvement actions will be read more closely.
3. Atlassian AI raised expectations.
Atlassian expanded Rovo across Jira and Confluence. Search, governance, and automation now connect conversations to issues and speed discovery. And the thing about AI is that it accelerates whatever direction you're already pointed. If goals are fuzzy, if estimates are guesses, and if dependencies are hidden - the automation will just help you go faster in the wrong direction.
The combination of all these changes is brutal - more coordination required, less overlap to coordinate in real-time, and higher penalties when plans fail because stakeholders can now see more of the picture.
Fixing the foundations: sprint planning best practices
The teams that make planning work have rebuilt their foundations with three planning best practices. They’re simple, written rules the whole team follows. They live on the planning board and the work items, so they still make sense after hand-offs and across time zones.
1. Turn priorities into a clear order of work
“Priority” breaks when people don’t share the same idea of value. The fix is to agree on a single, visible order - why this first, then that, and what won’t fit this sprint.
Teams that get this right:
- Turn goals and outcomes into backlog work on a steady rhythm, not ad-hoc.
 
Once a month, product and delivery confirm objectives and break them into epics and small slices for the next 6–8 weeks. This keeps meaningful work in the pipeline without crowding the backlog with assumptions and wishes that will change anyway.
- Write testable sprint goals using a consistent template.
 
Use the template: objective, test, constraint. Define what done means and how the team will know they succeeded. You should leave the sprint planning meeting with a clear idea of what needs to get done and what success looks like. For example, "Users can complete checkout using saved payment methods without re-entering card details" beats "improve checkout UX" every time. If you can't verify whether you hit it, it's a wish, not a goal.
- Run a 30-minute review before scheduling anything.
 
Agree the order of work before fixing the dates. In 30 minutes with engineering, design, and platform teams: walk through the dependency list, check the capacity, and identify top risks. Output: ordered path to done, clear boundary of what won't fit that sprint, and a simple rule for how you’ll handle blocked items. This surfaces cross-team friction before it becomes a mid-sprint crisis.
- Make dependencies visible where people actually plan.
 
Use one standard dependency field and two to three link types in Jira. Review the highest-risk links during planning - not when they block work on day three.
Easy Agile TeamRhythm's User Story Map makes this process concrete: build goals and outcomes at the top, connect work items to those goals, and group them clearly in sprint or version swimlanes. Turn on dependency lines so blockers and cross-team links show up in planning. When epics ladder up to outcomes, when the order of work explains itself, and when dependencies are visible on the same board, you stop rethinking priorities and start shipping.
2. Estimate together to find risk early
Estimation is not about hitting a perfect number. It is a fast way to identify risk while you can still change scope or order. Treat it as a short, focused conversation that makes hidden assumptions visible and records what you learned.
- Estimate together, live.
 
Run Planning Poker from the Jira issue view so everyone sees the same title, description, acceptance criteria, designs, and attachments. Keep first votes hidden and reveal them all together (so the first number doesn’t influence the rest).
Easy Agile TeamRhythm helps you run the estimation process in Jira, right where the work lives. Capture the key points from the discussion as comments in the issue view. Sync the final estimate back to Jira so your plan is based on current reality, not old guesses from two weeks ago.
- Record the reasoning, not just the number.
 
If a story moves from a 3 to an 8 after discussion, add two short notes on the work item:
- What changed in the conversation (the assumption you uncovered), and
- What’s still unknown (the risk you’re facing)
This helps the next person who picks it up and stops you repeating the same debate later.
- Stick to clear, simple scales.
 
Time estimates vary a lot by person; story points help teams agree on effort instead. Ask each team member to estimate the time involved in a task and you'll likely get five or more different answers, because timing depends on experience and understanding. But most team members will agree on the effort required to complete a work item, which means you can reach consensus and move on with your story mapping or sprint planning much more quickly.
Maintain a small set of recent work items (easy/medium/hard) with estimates and actuals. When debates don't seem to end, point to this known work: "This looks like that auth refactor from June - that was an 8 and took six days including the migration script."
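The "known work" anchor above can be kept as a small, explicit lookup. This sketch is illustrative - the item names and figures are invented, not real project data:

```python
# Sketch of the reference-item anchor: keep a few completed items with
# estimates and actuals, and point stalled debates at the closest one.
# Item names and figures are illustrative, not real project data.

REFERENCES = {
    "rename a config flag":    {"points": 2, "actual_days": 1},
    "add endpoint with tests": {"points": 5, "actual_days": 3},
    "auth refactor (June)":    {"points": 8, "actual_days": 6},
}

def closest_reference(proposed_points):
    """Return the (name, data) pair nearest to the proposed estimate."""
    return min(REFERENCES.items(),
               key=lambda item: abs(item[1]["points"] - proposed_points))

name, ref = closest_reference(7)
print(name)  # "auth refactor (June)" - the nearest anchor to a proposed 7
```

Refreshing the reference set every quarter or so keeps the anchors relevant as the codebase and the team change.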
- Limit committed work at roughly 80-85% of average capacity.
 
The space makes room for one improvement item, for unavoidable interrupts, and for reality. Setting unachievable goals sets your whole team up for failure. Failing to meet your sprint goals sprint after sprint is damaging for team motivation and morale. Use estimates to set reasonable goals as best you can. Consider team capacity, based on your past knowledge of how long tasks take to complete, how the team works, and potential roadblocks that could arise along the way.
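As arithmetic, the rule is trivial, which is part of its appeal. A minimal sketch with invented numbers:

```python
# Minimal sketch of the 80-85% rule: commit below average velocity so an
# improvement item and the inevitable interrupts still fit. Numbers invented.

def sprint_commitment(velocity_history, buffer=0.85):
    """Points to commit: a fraction of the recent average velocity."""
    avg = sum(velocity_history) / len(velocity_history)
    return int(avg * buffer)

history = [30, 28, 32]                    # recent sprint velocities
print(sprint_commitment(history))         # 25 points at an 85% buffer
print(sprint_commitment(history, 0.80))   # 24 points at an 80% buffer
```

The exact buffer matters less than agreeing on one and holding it, so the slack is planned rather than negotiated away mid-sprint.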
Protect honesty in the estimate numbers. An estimate is a shared view of scope and risk, not a target to enforce. If it needs to change, change it after a team conversation - don’t override it. Remember what we said earlier - when estimation turns into negotiation, people start “padding” (quietly inflating a number to feel safe), and accuracy gets worse.
3. Close the loop on improvements
Teams often fall into the trap of writing retrospective items as good intentions and not clear actions. Broad notes like “improve communication” or “fix our process” sound fine on paper, but they don’t tell anyone what to do next.
Action items are often ignored during a retrospective: everyone focuses on getting ideas out, and far less time goes to what will actually be done or changed as a result.
- Start every retro by checking last sprint’s actions.
 
Ten minutes. Did we close them? If closed, did they help? What did we learn? This way, the team acts on what they learned straight away and everyone can see what changed.
- Turn insights into Jira work items immediately.
 
Each action needs an owner, a due date, a link to the related work, and a clear definition of done. If you can't assign it immediately, it's not actionable - it's a complaint.
Easy Agile TeamRhythm's Retrospective lives right inside Jira, attached to your board. Turn retrospective items into Jira issues with owners and dates, link them to an epic or backlog item, and slot at least one into the next sprint. Track incomplete action items and repeating themes on the same page as delivery work.
- Make space for one improvement item every sprint.
 
Pick one action item for a sprint and finish it before starting another, so it doesn't get pushed aside by feature work. Treat action items like a feature: estimate it, track it, and close it with a note about the result and impact.
- Write a short impact note when you close an action.
 
A retro should give people energy. It should help them see that we're improving, that their voice matters, that something got better because of something they said.
Write a one-line impact note for every closed action item, ideally with a small metric - for example, "Batched PR reviews into two daily slots - median review time dropped from six hours to 90 minutes." This teaches the next team, justifies the investment, and shows that retro action items increase the team's capacity to deliver, and are not admin overheads.
What changes when you follow the planning best practices
Teams with solid sprint planning foundations rarely talk about them. They’re not on stage explaining their process. They’re just shipping steady work while everyone else wonders what their secret is.
There is no secret. The best teams have simply stopped treating planning as a meeting to survive and started treating it as a system of best practices that compound over time.
The mental load of planning sessions does not go away. But it shifts. Instead of processing all of it live in a room under time pressure, these teams have recorded their answers into written plans and notes that do the thinking for them.
The user story map explains what matters and why. The order of work shows dependencies before they block work. The estimation scores and notes capture not just numbers but the assumptions behind them. Retro action items sit next to product work, so the next sprint benefits from what the last one learned.
Fix the best practices and your sprint planning challenges will reduce. Not perfectly. Not forever. But enough that planning stops feeling like a crisis and starts feeling like what it should be: a calm view of what is possible now, with simple ways to adjust as you learn.
The cost of unclear plans is rising. AI speeds up what you feed it. Stakeholders can see more details easily. Work happens across time zones. In that world, clarity isn't a nice-to-have - it's a basic requirement.
The teams that will do well in 2026 aren't the ones with better tools or smarter frameworks. They’ll be the ones with a few simple best practices written into the plan and the work items, so handovers are easy, others can check and understand them, and they still make sense after the meeting.
 
- Agile Best Practice
Velocity Starts with Alignment
Velocity is a simple idea that’s often misunderstood. It measures how much work a team has completed in a sprint, and over time, it can show an average that helps teams plan what’s realistic for the next one. It’s a useful guide, but it’s not a goal in itself.
Some teams treat velocity like a target or a competition (and some are pushed to). They try to “set” it higher or compare it across teams, hoping it will prove that they’re getting faster. But velocity is not a budget, a forecast, or a speedometer. It’s a reflection of real progress made by a team working together, not a scoreboard of individual performance.
Used well, velocity helps a team understand their delivery rhythm and make sustainable plans. Used poorly, it can create pressure, encourage over-commitment, and hide the very problems it’s meant to reveal.
That’s why alignment matters. When a team plans together, estimates together, and agrees on what success looks like, velocity becomes a sign of steady progress rather than a race for points.
Where real pressure comes in: leadership, metrics and expectations
Team velocity often becomes a target not because the team wants it that way, but because leadership or stakeholders push for higher numbers. Community threads and agile commentators repeatedly flag this.
1. Metric as performance tool rather than guide
In “Is Measuring Velocity in Scrum Good or Bad for Your Team?” our partners at CPrime warn of risks to team morale and performance when teams are compared based on velocity. When velocity is viewed as proof of output, pressures mount to inflate estimates or cram more work.
2. Unrealistic demands from above
Teams feel the squeeze when leaders look at velocity charts and ask “Why can’t we do more?”. This shifts the burden to the team rather than the planning process. In practice, such demands lead to corners being cut, assumptions going unchallenged, and real issues being hidden. After all, increasing velocity by 20% could be as simple as inflating estimates by 25%.
3. Misuse of comparisons and individual “velocity”
Some leaders want to pit teams against each other or measure individuals. According to the Agile Alliance, “only the aggregate velocity of the team matters, and the phrase ‘individual velocity’ is meaningless.” That misuse causes resentment, gaming metrics, and fractured collaboration.
4. Volatility triggers pressure, not insight
When velocity swings — up or down — leadership often responds with mandates rather than inquiry. But those swings often signal real issues: unclear stories, unexpected dependencies, or overcommitment. Treating them as failures rather than clues deepens the challenge.
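As a rough sketch of treating swings as clues rather than failures, a team could flag sprints that deviate sharply from the running average and bring those to the retro for inquiry. The history and the 25% threshold here are made-up examples.

```python
# Illustrative sketch: treat velocity swings as prompts for inquiry,
# not as failures. The history and threshold are invented examples.
history = [20, 22, 19, 21, 12]  # the last sprint dropped sharply

def flag_swings(history, threshold=0.25):
    """Flag sprints deviating from the prior running average by more than threshold."""
    flags = []
    for i in range(1, len(history)):
        baseline = sum(history[:i]) / i
        change = (history[i] - baseline) / baseline
        if abs(change) > threshold:
            flags.append((i, round(change, 2)))
    return flags

print(flag_swings(history))  # the sprint that warrants a conversation
```

A flagged sprint is a question ("what changed?"), not a verdict.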
What alignment really means
Alignment is not a meeting. It is a shared understanding that connects planning, estimation, and delivery. When a team is aligned, everyone sees the same goal, understands what it will take to achieve it, and recognises where the risks lie.
But if alignment were easy, misalignment wouldn’t show up so often. You can usually spot it when planning feels tense, with people talking past each other, or when the room goes completely quiet because no one feels confident enough to speak up. Estimates vary wildly, or work shifts mid-sprint because the team never agreed on what “done” meant. All of this points to the same issue: the team doesn’t yet share a clear, common understanding of the work.
True alignment happens when planning and estimation happen together. Teams discuss what matters, how complex it is, and how confident they feel. Product owners bring the context, engineers share the technical view, and designers help surface dependencies. Together they build a realistic plan that connects the work to the outcome.
Once this shared view exists, estimates and velocity reflect understanding rather than guesswork. The team can plan with more confidence and adapt with less stress. Alignment is what yields the real progress.
How alignment is built in practice
Alignment doesn’t happen by accident. It’s shaped through open conversations, shared visibility, and habits that keep everyone working from the same plan. Tools support this, but alignment comes from how people use them together.
1. Start planning from shared priorities
Begin with what matters most. Sprint goals or high-level initiatives help anchor the discussion before you get into the detail of breaking down tasks. When everyone sees how each story connects to an outcome, decisions stay grounded in value and you reduce the opportunity for strong opinions to have undue influence.
2. Estimate as a team, not as individuals
The benefit of estimation comes from the conversation and the opportunity to share understanding. When people share their view of effort and complexity, hidden assumptions can surface early and be clarified. The Planning Poker feature in TeamRhythm makes this easy to run inside Jira, keeping the discussion focused on the work itself.
3. Keep priorities visible and current
Goals that can’t be seen are quickly forgotten. A live view of the sprint in Jira helps everyone see what’s next, what’s at risk, and what’s already done.
With TeamRhythm, teams can set clear goals for each sprint or iteration and see how every story connects to those goals. User stories sit under epics, showing at a glance how work is grouped and how each piece contributes to the bigger picture. The story map view keeps this visible to everyone, without adding extra admin.
4. Revisit alignment often
Alignment isn’t something you agree on once and then take for granted; it needs small, regular check-ins. The daily stand-up is a great time to do this. Use stand-ups to confirm what still matters most, discuss any new dependencies that have surfaced, and make quick adjustments before things slow down.
When you keep alignment visible in this way, it becomes part of how you work rather than another meeting to run. The plan stays shared, delivery feels steadier, and progress is easier to trust.
Turning alignment into confidence
Alignment is about giving people the clarity and trust they need to work as a team with confidence. When the whole team understands what matters, what’s achievable, and how their work contributes, they can move forward with focus instead of hesitation.
That shared understanding encourages open conversations, early problem-solving, and flexibility when things change. People feel comfortable speaking up because they know their perspective helps shape the outcome. Those are the building blocks of steady progress.
With that kind of clarity, velocity becomes a reflection of how well the team works together rather than how quickly they move. The numbers stop creating pressure and start showing evidence of reliable delivery.
Confident teams make thoughtful decisions, adapt to change without losing direction, and keep delivering work that truly matters. That’s what alignment makes possible.
 
- Agile Best Practice
The Hidden Costs of Agile Anti-Patterns in Team Collaboration
TL;DR
Anti-patterns in agile feel familiar, but often quietly undermine progress. In this guide, we explore five common collaboration traps: large user stories, forgotten retro actions, superficial estimation, premature "done" labels, and ceremonial agility. You'll learn how to recognise, understand, and experiment your way out of them.
The Comfort Trap: Why Familiar Agile Habits Hold Teams Back
In agile, anti-patterns don’t announce themselves. They slip in quietly, posing as good practice, often endorsed by experience or habit. Over time, they become the default - until velocity stalls, engagement dips, and retros feel like re-runs.
In our conversations with seasoned coaches and practitioners across finance, government, consumer tech, and consultancy, we realised one thing - anti-patterns aren’t just a team-level concern. They signal deeper structural misalignments in how organisations think about work, feedback, and change.
To protect the privacy of our interviewees, we’ve anonymised company names and individual identities.
Let’s unpack a few of the most pervasive anti-patterns hiding in plain sight, and how to shift them without disrupting momentum.
1. The Giant User Story Illusion
Large User Stories: Oversized tasks that delay feedback and blur team accountability.
"It felt faster to define everything up front. Until we got stuck." - Product Manager, global consumer organisation
Large user stories promise simplicity: one place, one discussion, a broad view stakeholders can get behind. But when delivery starts, the cracks widen:
- Estimations become guesswork.
- Feedback loops stretch.
- Individual contribution becomes unclear.
 
In many teams, the difficulty isn’t about size alone - it’s about uncertainty. Stories that span multiple behaviours or outcomes often hide assumptions, making them harder to discuss, estimate, or split.
Symptoms
- Stories span multiple sprints.
- Teams lose clarity on progress and ownership.
- Estimation sessions are vague or rushed.
 
Root Causes
- Pressure to satisfy stakeholder demands quickly.
- Overconfidence in early solution design.
- Lack of shared criteria for 'ready to build'.
 
Remedy
Break stories down by effort, known risks, or team confidence. One team created their own estimation matrix based on effort, complexity, and familiarity - grounding their pointing in delivery, not abstraction.
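A matrix like the one that team described might be sketched as follows. The three dimensions (effort, complexity, familiarity) come from their account, but the scoring scale and point buckets below are invented purely for illustration:

```python
# Hypothetical sketch of an estimation matrix: the dimensions come from
# the team's description, but the scoring and buckets are invented.
POINT_BUCKETS = [1, 2, 3, 5, 8, 13]

def suggest_points(effort, complexity, familiarity):
    """Each dimension scored 1-5; low familiarity increases the estimate."""
    score = effort + complexity + (5 - familiarity)
    index = min(score // 2, len(POINT_BUCKETS) - 1)
    return POINT_BUCKETS[index]

print(suggest_points(effort=2, complexity=2, familiarity=4))  # well-understood work -> 3
print(suggest_points(effort=4, complexity=5, familiarity=1))  # risky, unfamiliar work -> 13
```

The value is less in the exact numbers than in forcing the team to name which dimension is driving the size - and whether a large score means the story should be split.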
See also: The Ultimate Guide to User Story Mapping
2. Retro Amnesia: Action Items with No Memory
Incomplete Retro Actions: Items raised in retrospectives that quietly disappear, losing learning and team trust.
"We come up with great ideas in retros, but they disappear." - Agility Lead, multinational financial institution
When teams can’t see which actions carried forward, improvement becomes accidental. One coach described manually collecting and prioritising past action items in a Notepad file - because nothing in their tooling surfaced incomplete actions by default.
Worse still, valuable decisions get revisited unnecessarily. Teams forget what they tried and why.
Symptoms
- Recurring issues in retros.
- Incomplete actions vanish from view.
- Team energy for change drops over time.
 
Root Causes
- Retros run out of time before reviewing past items.
- No tooling or habit for tracking open actions.
- Actions lack owners or timeframes.
 
Remedy
Surface incomplete actions in one place and track progress over time. Revisit context: what triggered the decision? What outcome did we expect?
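A minimal sketch of what "one place" could look like, assuming the action records are pulled from wherever the team tracks retro outcomes (the data here is invented):

```python
# Illustrative sketch: surface incomplete retro actions, oldest first.
# The action records are invented; in practice they would come from
# wherever your team tracks retro outcomes (e.g. Jira issues).
actions = [
    {"summary": "Add a definition of ready", "sprint_raised": 12, "done": False},
    {"summary": "Pair on flaky tests", "sprint_raised": 14, "done": True},
    {"summary": "Split oversized stories earlier", "sprint_raised": 13, "done": False},
]

def open_actions(actions, current_sprint):
    """List incomplete actions, oldest first, with how long they've been open."""
    pending = [a for a in actions if not a["done"]]
    pending.sort(key=lambda a: a["sprint_raised"])
    return [(a["summary"], current_sprint - a["sprint_raised"]) for a in pending]

for summary, age in open_actions(actions, current_sprint=15):
    print(f"{summary} (open for {age} sprints)")
```

Reviewing this list at the start of every retro is usually enough to stop actions quietly disappearing.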
3. Estimation Theatre: When Story Points Become Currency
Story Point Anchoring: The habit of assigning consistent points to avoid conflict, not to clarify effort.
"The team got used to anchoring around threes. Everything became a three." - Agile Coach, public sector agency
Story points should guide shared understanding, not become a measure of performance or predictability. But many teams fall into habits:
- Anchoring to previous estimates.
- Avoiding conflict by picking the middle.
- Gaming velocity for perceived consistency.
 
Symptoms
- Homogeneous story sizes regardless of work type.
- Few debates or questions during pointing sessions.
- Velocity becomes the focus, not team clarity.
 
Root Causes
- Misuse of velocity as a performance metric.
- Comfort with consistency over conflict.
- Absence of shared understanding of story complexity.
 
Remedy
Reframe estimation as shared learning. Encourage healthy debate, try effort/risk matrices, and use voting to explore perspective gaps.
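One simple way to explore those perspective gaps is to look at the spread of planning poker votes rather than rushing to consensus. The votes and the spread threshold below are made-up examples:

```python
# Illustrative sketch: use the spread of planning poker votes as a signal.
# The votes and the threshold are invented examples.
def vote_spread(votes):
    """A wide spread suggests hidden assumptions worth discussing."""
    return max(votes) - min(votes)

def needs_discussion(votes, threshold=3):
    return vote_spread(votes) >= threshold

print(needs_discussion([3, 3, 5, 3]))   # narrow spread -> False
print(needs_discussion([3, 3, 8, 13]))  # wide spread -> True: talk before averaging
```

The point is to treat a wide spread as intelligence to extract, not a disagreement to average away.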
4. The "Done Means Done" Shortcut
False Completion: Marking items “done” when no meaningful progress was made.
"We mark items as done, even if we didn’t act on them." - Scrum Master, insurance and data services firm
Marking something "done" in order to move forward can feel pragmatic. But it hides reality. Was the issue resolved? Deferred? Invalidated?
Without clear signals, teams lose the ability to reflect truthfully on what’s working. One team described starting every retro with a conversation about what "done" actually meant, and adjusted their practices based on whether action was taken or just abandoned.
Symptoms
- Completed items have no real impact.
- Teams disagree on whether actions were truly resolved.
- Follow-up problems recur with no reflection.
 
Root Causes
- Ambiguity in what "done" means.
- Lack of closure or accountability for actions.
- Reluctance to acknowledge when something was dropped.
 
Remedy
Introduce a "no longer relevant" tag for actions. Start every retro by reviewing outcomes of previous actions, even if abandoned.
5. Anti-Patterns in Disguise: Agile vs Agile-Like
Ceremonial Agility: Teams follow agile rituals but avoid meaningful feedback, adaptation, or empowerment.
"We're agile. But we also push work through to meet delivery at all costs." - Project Manager, large enterprise tech team
Many teams operate in agile-like environments: sprints, boards, and standups, but decision-making remains top-down, and trade-offs go unspoken.
This hybrid approach isn't inherently bad - context matters. But when teams inherit agile ceremonies without agile values, collaboration becomes box-ticking, not problem-solving.
Symptoms
- Teams follow agile ceremonies but avoid real collaboration.
- Delivery decisions made outside of sprint reviews.
- Retrospectives focus only on team morale, not system change.
 
Root Causes
- Agile adoption driven by compliance, not culture.
- Delivery commitments override learning and adaptation.
- Leadership sees agile as a process, not a mindset.
 
Remedy
Is your agile framework enabling change - or disguising command-and-control? Use retros and sprint reviews to discuss system constraints. Ask what’s driving the way work flows, and who has the power to adjust it. Make trade-offs visible and shared.
Spot the Signs, Shape the Shift
Anti-patterns don’t mean your team is failing. They mean your team is learning. The most resilient teams are the ones that catch unhelpful habits early, and have the safety and support to try something else.
Retrospectives are the perfect place to surface them - as long as they’re structured for memory, not just reflection.
In the end, anti-patterns aren’t the enemy. Silence is.
Want to take action?
Try this in your next retro:
- Surface 1 anti-pattern the team has noticed (e.g. big stories, unfinished actions, silent standups).
- Ask: Why might this have emerged? What need did it originally serve?
- Run a one-sprint experiment to shift it. Keep it small.
 
The cost of anti-patterns isn’t just inefficiency. It’s losing the opportunity to get better, together.
 
- Agile Best Practice
Why Collaboration Gets Harder as Teams Scale
Collaboration in large-scale organisations often reveals friction in places teams expect to run smoothly. As product and development functions scale, the number of moving parts increases. So does the risk of misalignment.
At Easy Agile, conversations with our customers frequently surface familiar challenges. While each organisation is unique, the core struggles of collaboration are shared. To protect the privacy of the teams we spoke to, we’ve anonymised all quotes. But every insight is real, direct from the people doing the work.
This post is for anyone navigating the complexity of scaled collaboration, whether you're leading a team or working within one. Sometimes the hardest part is seeing the problem clearly. These are the patterns teams are running into, the questions they’re wrestling with, and the cracks that emerge when planning, alignment, and communication break down. Understanding and acknowledging these issues is the first step toward solving them.
Here’s what teams are experiencing and the key questions they’re grappling with as they scale collaboration.
TL;DR – Common collaboration challenges in scale-ups and enterprises:
- Teams struggle with communication and alignment, especially when working across multiple teams or departments
- Managing cross-team dependencies is a significant challenge, often causing delays and requiring frequent coordination
- Capacity planning and skill allocation are difficult, particularly when teams have to balance project work with ongoing operational tasks
- Teams face challenges in breaking down work effectively and maintaining visibility of progress across different teams
- Frequent changes in priorities and scope creep disrupt team planning and execution
- There are difficulties in translating high-level strategy into actionable team priorities and objectives
- Teams struggle with effective retrospectives and continuous improvement processes
 
What breaks down in cross-team communication?
Communication challenges tend to intensify with scale. As soon as multiple teams are involved, misalignment becomes more likely. A Senior Product Manager from a global HR tech firm described a pattern many teams will recognise:
"One of the main themes I heard in conversations with leadership was the lack of process, transparency, visibility, and dependency tracking. It’s always been manual across teams. We’ve done a really good job, but there’s an opportunity to do better."
Another team member highlighted how this disconnect tends to grow over time:
"At the start of each quarter, our conversations are strategic and cross-functional, involving sales and strategy teams. But as we dive deeper into execution, communication shrinks down to daily engineering huddles, and essential alignment details often get lost."
The problem isn't a lack of communication, but rather a shift in its focus. When delivery takes centre stage, strategic context gets sidelined. When teams move into execution mode, that shift in communication cadence creates blind spots across departments, leading to confusion, duplicated work, or misaligned outputs.
Why is managing dependencies across teams so difficult?
Dependencies create friction when they aren’t visible or clearly owned. Coordination across teams can be derailed by unclear sequencing, late handovers, or competing timelines. An Agile Coach at a financial institution shared:
"We had to run bi-weekly cross-program dependency calls just to stay on top of what was blocking who. We just list dependencies manually, there isn’t any unified visibility. At the ART level, it’s a mix of RTEs, Scrum Masters, and team members trying to link things, but beyond that, it falls apart."
A delivery leader at a global credit bureau reinforced the limitations of existing tools:
"I’ve never successfully been able to really tackle dependency visualization and put a process around that. It's always been manual. When I'm speaking to an executive, that means something... But when I'm speaking to someone on an agile team, it changes as it rolls up... Without proper plugins, even a robust tool like Jira struggles to provide clear dependency visuals. Planning becomes complicated quickly, leaving teams stuck."
Dependency risk increases when shared work isn’t tracked or visualised in a way that’s accessible to all stakeholders. Teams need to see not just their own work, but how it connects with others. Teams need more than awareness - they need shared visibility, clarity on ownership, and consistent ways to plan around dependencies.
How do teams manage capacity when demands keep shifting?
Planning team capacity isn’t just about headcount, but also about competing demands. Teams are often asked to deliver roadmap initiatives while supporting legacy systems, resolving production issues, or addressing technical debt. A product leader from a cybersecurity company shared:
"We’re always trying to achieve a lot with limited resources, and it makes roadmapping really difficult. We’ve made progress in estimating the team's bandwidth more accurately by looking at what they actually delivered last quarter. But we still hit the same issue - too many topics, too little time."
Another team shared how they introduced tighter prioritisation controls using a third-party tool, but even rigid structures have their limits:
"We use XXX as a source of truth for prioritisation. We have around 80 different initiatives prioritised from 1 to 80 of importance... no meeting can be scheduled if the project is not approved in the tool."
This helped formalise approvals and reduce noise, but it also revealed a deeper issue. Even with a strict gating process, the volume of initiatives stayed high, and prioritisation alone couldn’t solve for limited capacity. Clearer structures don’t automatically reduce the demand on teams or ease delivery expectations. That tension persists unless strategic scope is also narrowed.
What makes work breakdown and visibility so hard to maintain?
Breaking down initiatives into independent, testable stories is not always straightforward, especially when scope is uncertain or spans months. A software engineer working across multiple teams explained:
"Breaking work down is hard - some teams still think in layers. They say, ‘This only delivers value when the whole thing’s done.’ On top of that, we often run big planning in a five-hour day or stretch it awkwardly over two days. Third parties and shared services don’t get folded into teams, which makes breakdown and clarity harder."
Large epics often outlive the context in which they were created. As scope evolves, teams may struggle to maintain clear acceptance criteria and shared understanding.
An Agile Coach reinforced how hard it is to keep sight of progress:
"We break each story into smaller pieces as much as possible where it's testable by itself so the testing team can test it... But if it’s a lengthy project, spanning more than two months, it’s easy to lose clarity and effectiveness... Consistently tracking actions across multiple sprints involves endless toggling. It's difficult to quickly understand what's truly improving and what’s still stuck."
As work grows more complex, clarity suffers. Without reliable visibility, work risks stalling or repeating unnecessarily. Teams need tools, systems, and shared language to ensure breakdowns don’t get lost in the shuffle and progress remains meaningful.
Why do changing priorities and scope creep derail plans?
Frequent priority changes and scope creep disrupt planning discipline. They often signal deeper issues: vague goals, shifting leadership expectations, or unclear ownership. One product leader summed it up:
"Priorities used to switch constantly - sometimes halfway through a project, we’d have 30% done and then get pulled into something else. That context-switching really hurts. It demoralises engineers who were already deep into a feature. We had to raise it in a full engineering and product retrospective just to get some stability."
Another shared the toll it takes on delivery teams:
"We often found ourselves mid-quarter pivoting to newly emerging business needs, without fully aligning on what gets dropped. That lack of clarity meant engineers felt whiplash, and team goals kept shifting."
Without stable anchors in the form of clear goals and boundaries, even well-planned work can unravel. Work, then, expands to fill the available sprint, regardless of long-term impact, which brings us to the next challenge.
What stops teams from aligning strategy to daily work?
Teams need clear goals. But clarity breaks down when strategic objectives are too broad or when every team interprets them differently. A senior product manager explained:
"Prioritisation is only as good as your strategy, and ours wasn’t clear. The business goal was just ‘grow revenue,’ but what does that mean? Acquisition? Retention? Everyone wrote their own product objectives. It became a bit of a free-for-all. When goals are vague, it’s hard to prioritise work that ladders up to anything concrete."
Another added:
"We all set objectives tied to broad company goals, but when those goals lack precision, our objectives become misaligned, making prioritisation difficult and often inconsistent."
Without alignment between leadership priorities and team-level execution, valuable work can feel directionless. Objectives become outputs rather than outcomes.
What holds back meaningful retrospectives?
Retrospectives are intended to surface learning. But without consistent follow-through, they risk becoming routine. One Agile Coach shared how to keep them practical:
"We’ve tried tools where you just send a link and everyone rates how hard it was to get something done. But too often, it ends up with one person speaking and everyone else just agreeing. We’re trying to avoid the loudest voice dominating the retro. It’s still a challenge to get real, reflective conversations."
Another shared the risk of retro fatigue:
"To track action items consistently isn't easy... I have to toggle down and look at each one, which can make things cumbersome when ensuring certain behaviours have stuck... Effective retrospectives should surface recurring issues, not just review the recent past. Discussing ongoing challenges helps teams proactively tackle problems and move forward."
The barrier is rarely the ceremony - it’s the follow-up. Teams need lightweight ways to track retro actions, validate changes, and revisit unresolved pain points.
Where to focus
Improving collaboration means addressing the systems and habits that hold teams back:
- Keep strategic conversations active, not just at quarterly planning.
- Visualise and track cross-team dependencies clearly.
- Protect capacity for both roadmap work and operational stability.
- Break work into testable, clearly defined pieces.
- Reinforce the connection between business goals and delivery priorities.
- Make retrospective actions visible and measurable.
 
The teams we speak to aren’t struggling because they lack process. They’re navigating complexity. The opportunity lies in simplifying where it matters and supporting teams with the clarity to make progress, together.
The first step is recognising these patterns and giving them language. When teams can see and name the problem, they’re already on the path to solving it.
How Easy Agile can help
Whether you're dealing with blurred dependencies, vague objectives or sprint volatility, Easy Agile offers three purpose-built solutions to help teams stay aligned:
- Easy Agile Programs brings structure and visibility to cross-team planning in Jira. Perfect for managing dependencies and long-range planning across multiple teams and projects.
- Easy Agile Roadmaps gives every team a simple, shared timeline view, so they can prioritise and sequence work with strategic context.
- Easy Agile TeamRhythm makes sprint planning, story mapping, and retrospectives more engaging and purposeful, turning agile ceremonies into actionable, team-owned progress.
 
 
- Agile Best Practice
The Problem with Agile Estimation
The seventh principle of the Manifesto for Agile Software Development is:
Working software is the primary measure of progress.
Not story points, not velocity, not estimates: working software.
Jason Godesky, Better Programming
Estimation is a common challenge for agile software development teams. The anticipated size and complexity of a task is anything but objective; what is simple for one person may not be for another. Story points have become the go-to measure to estimate the effort involved in completing a task, and are often used to gauge performance. But is there real value in that, and what are the risks of relying too heavily on velocity as a guide?
Agile estimation
As humans, we are generally terrible at accurately measuring big things in units like time, distance, or in this case, complexity. However, we are great at making relative comparisons - we can tell if something is bigger, smaller, or the same size as something else. This is where story points come in. Story points are a way to estimate relative effort for a task. They are not objective and can fluctuate depending on the team's experience and shared reference points. However, the longer a team works together, the more effective they become at relative sizing.
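That comparative instinct can be sketched mechanically: pick a well-understood reference story, judge how much bigger or smaller each new story feels, and snap the result to the team's scale. The ratios and Fibonacci buckets below are invented for illustration:

```python
# Illustrative sketch of relative sizing: compare each story to a shared
# reference story rather than estimating in absolute units.
# The ratios and buckets are invented examples.
FIB = [1, 2, 3, 5, 8, 13]

def relative_size(ratio_to_reference, reference_points=3):
    """Snap (reference points x felt ratio) to the nearest Fibonacci bucket."""
    raw = reference_points * ratio_to_reference
    return min(FIB, key=lambda p: abs(p - raw))

print(relative_size(1.0))  # feels the same as the reference -> 3
print(relative_size(2.5))  # feels ~2.5x bigger -> 8
```

No team actually computes this; the sketch just shows why the scale only makes sense relative to a shared reference - which is also why points don't transfer between teams.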
The teams that I coach have all experienced challenges with user story estimation. The historical data tells us that once a story exceeds 5 story-points, the variability in delivery expands. Typically, the more the estimate exceeds 5 points, the more the delivery varies from the estimate.
Robin D Bailey, Agile Coach, GoSourcing
Scale of reference
While story points are useful as an abstraction for planning and estimating, they should not be over-analyzed. In a newly formed team, story points are likely to fluctuate significantly, but there can be more confidence in the reliability of estimations in a long-running team who have completed many releases together. Two different teams, however, will have different scales of reference.
At a company level, the main value I used to seek with story points was to understand any systemic problems. For example, back when Atlassian released to Server quarterly, the sprints before a release would blow out and fail to meet the usual level of story point completion. The root cause turned out to be a massive spike in critical bugs uncovered by quality blitz testing. By performing better testing earlier and more regularly we spread the load and also helped to de-risk the releases. It sounds simple looking back but it was new knowledge for our teams at the time that needed to be uncovered.
Mat Lawrence, COO, Easy Agile
Even with well-established teams, velocity can be affected by factors like heightened complexity when dependencies are scheduled together, or even just the average number of story points per ticket. If a team has scheduled a lot of low-complexity tickets, their process might not handle the throughput required. Alternatively, having fewer high-complexity tickets could drastically increase the effort required from other team members to review the work. Either situation could affect velocity, but both represent bottlenecks.
Any measured change in velocity could be due to a number of other factors: capacity shifting through changes in headcount, or team members being absent due to illness or planned leave. The reality is that the environment is rarely sterile and controlled.
Relative velocity
Many organizations may feel tempted to report on story points, and velocity reports are readily available in Jira. Still, they should be viewed with caution if they’re being used in a ‘team of teams’ context such as across an Agile Release Train. The different scales of reference across teams can make story points meaningless; what one team considers to be an 8-point task may be a 3-point task for another.
To many managers, the existence of an estimate implies the existence of an “actual”, and means that you should compare estimates to actuals, and make sure that estimates and actuals match up. When they don’t, that means people should learn to estimate better.
So if the existence of an estimate causes management to take their eye off the ball of value and instead focus on improving estimates, it takes attention from the central purpose, which is to deliver real value quickly.
Ron Jeffries, Co-Author of the Manifesto for Agile Software Development, in Story Points Revisited
Seeking value
However, story points are still a valuable tool when used appropriately. Reporting story points to the team using them and providing insights into their unique trends could help them gain more self-awareness and avoid common pitfalls. Teams who are seeking to improve how they’re working may wish to monitor their velocity over time as they implement new strategies.
Certainly, teams working together over an extended period will come to a shared understanding of what a 3 story point task feels like to them. And there is value in the discussion and exploration that is needed to get to that point of shared understanding. The case for 8 story points as opposed to 3 may reveal a complexity that had not been considered, or it may reveal a new perspective that helps the work be broken down more effectively. It could also question whether the work is worth pursuing at all, and highlight that a new approach is needed.
The value of story points for me (as a Developer and a Founder) is the conversations where the issue is discussed by people with diverse perspectives. Velocity is only relatively accurate in long-run teams with high retention.
Dave Elkan, Co-CEO, Easy Agile
At a company level, story points can be used to understand systemic problems by monitoring trends over time. While this reporting might not provide an objective measure, it can provide insights into progress across an Agile Release Train. However, using story point completion as a measure of individual or team performance should be viewed with considerable caution.
Story points are a useful estimation tool for comparing relative effort, but they depend on shared points of reference, and different teams will have different scales. Even established teams may notice their velocity change over time. For this reason, and while velocity reporting can provide insights into the team's progress, it must be remembered that story points were designed to estimate effort, not to measure performance. And at the end of the day, we’re in the business of producing great software, not great estimates.
Looking to focus your team on improvement? Easy Agile TeamRhythm helps you turn insights into action with team retrospectives linked to your agile board in Jira, to improve your ways of working and make your next release better than the last. Turn an action item into a Jira issue in just a few clicks, then schedule the work on the user story map to ensure your ideas aren’t lost at the end of the retrospective.
Many thanks to Satvik Sharma, John Folder, Mat Lawrence, Dave Elkan, Henri Seymour, and Robin D Bailey for contributing their expertise and experience to this article.
 
- Workflow
How to use story points for agile estimation
Story points can be a little confusing and are often misunderstood. Story points are an important part of user story mapping, and many agile teams use them when planning their work. But they aren't as simple as adding numbers to tasks or estimating how long a job will take.
Even if you’ve been using story points for a while, you’ll find that different teams and organizations will use them differently.
So, let’s define story points, discuss why they’re so useful for agile teams, and talk about some of the different ways teams implement them in story mapping and sprint planning.
What are user story points?
Story points are a useful unit of measurement in agile, and an important part of the user story mapping process. You assign a number to each user story to estimate the total effort required to bring a feature or function to life.
When to estimate story points
User stories can be estimated during user story mapping, backlog refinement, or during sprint planning.
Once a user story has been defined, mapped to the backbone, and prioritized, it's time to estimate the story points. It is a good idea to work with your team to do this, as each team member plays a different role in different stories, and knows the work involved in UX, design, development, testing, and launching. Collaborating on story point estimation will also help you spot dependencies early.
It is best to assign story points to each user story before you sequence them into releases or sprints. This allows you to assess the complexity, effort, and uncertainty of each user story in comparison to others on the backlog, and to make informed decisions about the work you decide to commit to each sprint or release.
How to estimate user story points
When estimating story points, you're looking at the total effort involved in making that feature or functionality live so that it can deliver value to the customer. Your team will need to discuss questions like:
- How complex is the work?
 - How much work is needed?
 - What are the technical abilities of the team?
 - What are the risks?
 - What parts are we unsure about?
 - What do we need in place before we can start or finish?
 - What could go wrong?
 
Tip: If you're having trouble estimating a story or the scope of work is overwhelming, you might need to break your story down into smaller parts to make multiple user stories.
What is a story point worth?
This is where story points can get a little confusing, as story points don’t have a set universal value. You kind of have to figure out what they’re worth to you and your team (yep, real deep and meaningful stuff).
Here’s how it works:
- Each story is assigned a certain number of story points
 - Points will mean different things to different teams or organizations
 - 1 story point for your team might not equal the same amount of effort involved in 1 story point for another team
The amount of effort represented by 1 story point should stay consistent for your team from sprint to sprint, and from one story to another
 - 2 story points should equal double the effort compared to 1 story point
 - 3 story points should equal triple the effort compared to 1 story point… and so on
 
The number you assign doesn't matter - what matters is the ratio. The story points should help you demonstrate relative effort between each story and each sprint.
Estimating story points for the first time
Because story points are relative, you need to give yourself some baseline estimates for the first time you do story point estimation. This will give you a frame of reference for all future stories.
Start by choosing stories of several different sizes:
- One very small story
 - One medium sized story
 - One big story
 
...a bit like t-shirt sizes.
Then assign points to each of these baseline stories. Your smallest story might be 1. If your medium story requires 3 times more effort, then it should be 3. If your big story requires 10 times the effort, it should be 10. These numbers will depend on the type of stories your team normally works on, so your baseline story numbers might look different to these.
The important thing is that you’ll be able to use these baseline stories to estimate all your future stories by comparing the relative amount of effort involved.
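A baseline comparison like this can be sketched in a few lines. The story names and point values below are hypothetical examples, not a prescribed scale; the idea is simply that new estimates get sanity-checked against a familiar anchor story.

```python
# Hypothetical baseline stories and their agreed points (illustration only).
BASELINES = {
    "Fix typo on settings page": 1,      # very small
    "Add export-to-CSV button": 3,       # medium
    "Integrate new auth provider": 10,   # big
}

def nearest_baseline(candidate_points):
    """Return the baseline story whose points are closest to a new estimate,
    so the team can ask: 'does this really feel like that story?'"""
    return min(BASELINES.items(), key=lambda kv: abs(kv[1] - candidate_points))

story, points = nearest_baseline(4)
# A rough 4-point guess lands closest to the 3-point medium story.
```

In practice this "lookup" happens in conversation rather than code, but keeping a few reference stories visible in Jira serves the same purpose.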
Over time, you and your team will find estimating user stories becomes easier as your shared understanding of the work develops. This is where story points become most valuable, helping your team align expectations and plan more effectively.
Make estimation easier
An app for Jira like Easy Agile TeamRhythm makes it easy to see team commitment for each sprint or version, with estimate totals on each swimlane.
Using the Fibonacci sequence for story point estimation
Some teams use the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21, 34, 55, 89, etc.) for their story point estimates, rather than staying linear or allowing teams to use any number (1, 2, 3, 4, 5, 6, 7, etc.).
This has its benefits. For example, if you're looking at a story and trying to estimate whether it's a 5, 8, or 13, it's much quicker and easier to come up with an answer than trying to land on the right number between, say, 4-15. You'll likely reach a consensus much more quickly.
Because the scale has gaps, you also won't be able to average the team's story points to finalize the estimation. Instead, you'll need to discuss the work and decide on the best estimate from a limited set of options.
But it does limit your options - if you have a story that’s more effort than 34, but less than 55, your estimate might be less accurate.
Using story points to estimate velocity
After some time working together most teams will have a good idea about how much effort is involved in each story point.
Of course, timing isn't exact - there's a bell curve, and story points are designed to be an estimate of effort, not time.
But story points (and knowing their approximate timing) can be useful when it comes to figuring out how much your team can get done each sprint.
You should be able to estimate roughly how many story points your team can manage during a two-week sprint, or whatever timeframe you’re working to.
For example, if your team can usually get through 3 story points per day, this might add up to 30 story points across a two-week sprint. This is your velocity.
Velocity is useful for user story mapping and sprint planning. When mapping your user stories to sprints or versions, you can check the total story points and make sure it matches up with your velocity so you’re not over- or under-committed.
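The velocity check described above amounts to simple arithmetic. The sketch below assumes a hypothetical 10-working-day sprint and made-up story names; it just compares a planned total against velocity.

```python
def sprint_velocity(points_per_day, sprint_days=10):
    """Velocity for a sprint, e.g. 3 points/day over 10 working days = 30."""
    return points_per_day * sprint_days

def check_commitment(planned_stories, velocity):
    """Compare the planned story-point total against team velocity.
    planned_stories is a list of (name, points) tuples."""
    total = sum(points for _, points in planned_stories)
    if total > velocity:
        return f"over-committed by {total - velocity} points"
    if total < velocity:
        return f"{velocity - total} points of slack remaining"
    return "commitment matches velocity"

plan = [("guest checkout", 8), ("status emails", 5), ("postcode auto-fill", 3)]
check_commitment(plan, sprint_velocity(3))  # 16 points planned against 30
```

The point of the check is the conversation it triggers, not the number: a large gap in either direction is a prompt to revisit the plan before the sprint starts.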
As you can see, there are a few different methods for estimating work. The best advice is to be conservative and not overload the team.
Over time, your estimations should become more accurate.
Using Story Points in Scrum, Kanban, and Extreme Programming
Story points are central to estimation and planning processes in many agile methodologies. Scrum and Extreme Programming (XP) rely heavily on story points to gauge the effort and complexity of user stories.
Scrum teams use story points during sprint planning to decide which tasks to include in the upcoming sprint, encouraging discussion that leads to shared context and understanding of the work.
Extreme Programming, on the other hand, uses story points to assess the size of features, enabling teams to prioritize and allocate resources effectively. Teams using Kanban can benefit from story points by using them to set work-in-progress limits and optimize the flow of tasks across the board.
While the specific practices may differ, story points can help encourage team collaboration and a more predictable flow of work.
 
- Workflow
Agile Estimation Techniques: A Deep Dive Into T-Shirt Sizing
TL;DR: What T‑shirt sizing is, where it shines, how to run it with a real team, how it relates to story points, and how to avoid common traps.
A quick scene
Friday afternoon. You’ve inherited a backlog that sprawls for metres. Someone asks, “Roughly how long to ship the payments revamp?” Your team glances at the ceiling. You don’t need a perfect answer, you need a safe first pass that helps you plan sensibly. That’s where T‑shirt sizing earns its keep.
What is T‑shirt sizing in agile?
T‑shirt sizing is a lightweight way to estimate relative effort using XS, S, M, L, XL. It’s great for roadmaps, release planning, and early discovery, moments when detail is thin and the goal is direction, not exact dates.
Think of it as a sketch: enough shape to discuss options and make trade‑offs. As work moves closer to delivery, translate sizes into more precise estimates (for most teams, that’s story points).
New to story points or need a refresher? Read How to use story points for agile estimation and 10 reasons why you should use story points.
When to use T‑shirt sizes vs story points
Use T‑shirt sizes when:
- You’re scanning a large backlog to spot big items and cut noise
 - You’re sequencing epics on a roadmap or release plan
 - You’re aligning many teams for a Program Increment and need a first pass on effort
 
Switch to story points when:
- You’re shaping a commitment for a sprint or release
 - The team understands the work well enough to discuss risk, complexity, and unknowns
 
Simple rule of thumb - story point estimates are best for sprint planning. Affinity mapping, bucket systems, dot planning, and T-shirt sizing are better for roadmap and release planning.
How to run a T‑shirt sizing session (two practical patterns)
The main thing to keep in mind is you don’t need ceremony to get this right. What you need is speed and shared understanding.
1) Small set of items (do this in 20–30 minutes)
- Pick the work: 10–20 epics or features you want to compare.
 - Calibrate quickly: Agree on one example for S, M, L from your history.
 - Silent first pass: Each person suggests a size. Keep it to 30 seconds per item.
 - Discuss only the outliers: If your spread is XS to XL, talk. If it’s S/M, move on.
 - Capture the decision: Write the size on the card/issue and one sentence on why (risk, dependency, unknown). Future‑you will thank you.
 
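The "discuss only the outliers" step from the first pattern can be expressed as a tiny rule. This is an illustrative sketch, assuming a threshold of one adjacent size; your team might tolerate a wider spread.

```python
SIZE_ORDER = ["XS", "S", "M", "L", "XL"]

def needs_discussion(votes, max_spread=1):
    """Flag an item for discussion when the votes span more than
    max_spread adjacent sizes: S/M passes, XS-to-XL does not."""
    ranks = [SIZE_ORDER.index(v) for v in votes]
    return (max(ranks) - min(ranks)) > max_spread

needs_discussion(["S", "M", "S"])    # False: adjacent sizes, move on
needs_discussion(["XS", "M", "XL"])  # True: wide spread, talk it through
```

The rule keeps the session fast: agreement within a size gets recorded and skipped, and the team spends its timebox only where the spread signals a genuine difference in assumptions.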
2) Huge backlog (affinity + buckets)
- Affinity wall: Lay items left‑to‑right from smaller to larger.
 - Light buckets: Draw soft bands for XS/S/M/L/XL and nudge items into them.
 - One pass of challenges: People can move cards they strongly disagree with, but they must explain what information would change the estimate.
 
If you prefer a card‑based approach, swap in Planning Poker and use T‑shirt cards instead of numbers.
Here's an example of how T-shirt sizing would play out at a fashion retailer (we know, it's a bit on the nose). The team had a quarter‑long goal to reduce checkout drop‑off. In their first hour, they T‑shirt sized five ideas:
- New payment provider (XL) - Big integration, contract, risk
 - Guest checkout (M) - Some UX and auth changes
 - Auto‑fill postcode (S) - Low risk, measurable uplift
 - Order status emails (M) - Copy, events, templates
 - Retry logic on payments (L) - Engineering heavy, few dependencies
 
They sequenced S → M → L and left the XL until discovery removed the scariest unknowns. Two sprints later, they pointed the M/L items and committed. The XL became a spike with clear questions.
Where sizing goes sideways and how to recover
Converting sizes to points then leaving them untouched
Why it bites: People treat the conversion as a promise, plans harden, trust erodes when reality changes.
Try this: If you convert for prioritisation, mark those items as provisional, replace the size with true points during refinement, and keep a short note on what changed. For more on timing and trade-offs, see 5 agile estimation tips.
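If you do convert locally, keeping the "provisional" status explicit is what stops the number hardening into a promise. The mapping below is a hypothetical example of one team's local scale, not a recommended conversion table.

```python
# Hypothetical, team-local conversion from sizes to provisional points.
PROVISIONAL_POINTS = {"XS": 1, "S": 2, "M": 5, "L": 8, "XL": 13}

def provisional_estimate(size):
    """Convert a T-shirt size to points for rough prioritisation only.
    The 'provisional' flag is the reminder to re-point during refinement."""
    return {"points": PROVISIONAL_POINTS[size], "provisional": True}

provisional_estimate("M")  # {'points': 5, 'provisional': True}
```

In Jira this might be a label or custom field rather than code; what matters is that everyone can see which numbers are placeholders and which were pointed by the team.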
Treating sizes as dates
Why it bites: A neat row of S and M turns into calendar commitments, and the team inherits a deadline they never made.
Try this: Share ranges based on throughput, update as you learn, and keep the conversation focused on outcomes.
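Sharing a range based on throughput can be as simple as dividing remaining work by your best and worst recent sprints. The sample throughputs below are invented for illustration; a real forecast would use the team's own history.

```python
import math

def forecast_range(remaining_points, throughput_samples):
    """Give an optimistic-to-pessimistic sprint count from recent
    throughput (points completed per sprint), instead of a single date."""
    best, worst = max(throughput_samples), min(throughput_samples)
    return (math.ceil(remaining_points / best),
            math.ceil(remaining_points / worst))

forecast_range(60, [18, 25, 21])  # between 3 and 4 sprints remaining
```

Answering "when?" with "3 to 4 sprints, updated as we learn" keeps the conversation honest in a way a single calendar date cannot.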
One scale across many teams
Why it bites: S in one team is M in another, and cross-team comparisons become arguments, not insight.
Try this: Keep scales local, and during PI Planning compare sizes only to surface risk and dependencies. Use a shared program board instead of chasing numeric parity.
Endless debate on edge cases
Why it bites: The time you spend arguing dwarfs the cost of being slightly wrong.
Try this: Timebox each item, discuss only the outliers, capture the uncertainty in one sentence, and move on. If a decision is still sticky, schedule a small spike with a clear question.
Skipping calibration examples
Why it bites: What counted as M last quarter slowly drifts, new joiners anchor on guesses.
Try this: Keep a living set of examples for S, M, and L in Jira, refresh them when your tech or team changes, and link those issues in your session notes.
Loud voices steer the room
Why it bites: Anchoring replaces thinking, quieter people disengage.
Try this: Start with a silent first pass, reveal together, then invite two or three different voices to speak before the most senior person. A little psychological safety goes a long way.
Jumping from XL epics to sprint commitments
Why it bites: The team commits to fog, you get churn and rework.
Try this: Slice the work first, use story mapping to find thinner slices, and refine before you point.
Mixing size and value
Why it bites: Small items with real impact wait behind large but low value work, momentum stalls.
Try this: Keep a separate value signal, a one line impact hypothesis is enough, then weigh size against value when you sequence. The planning guide above has a simple pattern you can copy.
No breadcrumb of why you chose a size
Why it bites: You cannot learn from your estimates later and the next session restarts from scratch.
Try this: Add one sentence on risk, dependency, or unknown to each decision, then check a sample in your next retro. Use action-driven retrospectives to close the loop.
Recording sizes and keeping plans honest in Jira
- For shaping and tracking epics, keep sizes and notes on a shared board the team actually uses every day. A user story map gives context and helps when you later point stories. Easy Agile TeamRhythm supports mapping, lightweight estimation and retros all inside Jira.
 - When several teams are involved, use a program board to visualise objectives, dependencies and dates. Easy Agile Programs keeps this view in Jira so you can plan PI events without spreadsheets.
 - For public roadmaps, keep it simple and visual. Easy Agile Roadmaps helps you share a plan stakeholders actually read.
 
Regardless of the type of agile project you're working on or the estimation process you choose, the more you practice, the quicker your team will become master estimators. We recommend trying a couple of different methods to see which one feels most comfortable for your team.
FAQ (for searchers and skimmers)
- What’s the point of T‑shirt sizing if we’ll use story points later?
 
It lets you compare big pieces quickly so you can decide what to pursue, sequence sensibly, and avoid wasting refinement time on the wrong items.
- Can we convert sizes to story points?
 
You can, locally, for prioritisation, just mark them clearly and replace with real points before a sprint commitment. Don’t reuse another team’s scale.
- Stakeholders want dates. How do we answer without over‑promising?
 
Share ranges based on today’s sizes and the team’s recent throughput, and update as items shrink and you learn more. For a practical way to connect goals, value and delivery, see How to make plans that actually ship and We simplified our OKRs.
- How do we run T‑shirt sizing across many teams?
 
Keep it team‑local first. During PI planning, compare sizes only to surface risk and dependencies, not to rank teams. Use a program board to keep the conversation grounded. Start with our PI Planning guide.
 