Software teams reinvented how they work in the last fifteen years. Agile workflows, two-week sprints, daily standups, continuous deployment, retrospectives — every one of these moves was built around the same assumption: the work changes faster than any plan can capture, and the team has to inspect and adapt as it goes.
HR is still running the system built for the opposite assumption. Annual goals set in January, calibrated against work done in October. Quarterly reviews that lag the work by months. Performance forms designed by people who haven't shipped software in a decade.
The mismatch isn't theoretical. It produces specific pain that managers and engineers feel every cycle — goals that go stale before they're reviewed, feedback that arrives too late to act on, retrospectives at the team level that have no analog at the individual level. This post is about why agile teams need an agile performance system, what that actually looks like, and where it has to live to work. It builds on our argument that annual reviews are dying — but the broader claim is that the entire HR cadence model is wrong for agile work, not just the annual review at the end of it.
How agile changed what work looks like
The Agile Manifesto was published in 2001. Twenty-five years later, almost every software team in the world runs some version of its ideas. The specific frameworks vary — Scrum, Kanban, SAFe, Shape Up, whatever the team has converged on — but the underlying moves are consistent.
Plans are short. The horizon a software team plans against is rarely longer than a quarter and often shorter. Two-week sprints are common; some teams ship multiple times a day. The question "what should we be working on?" gets reanswered every week or two, not every January.
Feedback loops are tight. Continuous integration tells the engineer within minutes whether their code broke something. Customer telemetry tells the team within days whether the feature is being used. A/B tests close in weeks, not quarters. The team's own retrospective, once a sprint, is the explicit ritual where last week's work becomes this week's adjustment.
Roles are fluid. The senior engineer who shipped the API last sprint is paired with the staff PM on a discovery problem this sprint. The new manager is doing code reviews while interviewing for two open roles and running 1:1s for six. Job descriptions written in January are inaccurate by April.
The whole system is designed around the assumption that what you knew three months ago is incomplete information. The plan adapts. The team adapts. The work adapts. Everything in the engineering and product organization runs at this cadence — except, in most companies, the system that evaluates how the people inside it are doing.
How performance management didn't change with it
Performance management at most mid-sized companies is still doing this:
Annual goals. A document is created in January. The goals it contains describe work the team thinks it will do over the next twelve months. Six of those goals are obsolete by April. Two of them describe projects that got cancelled. The remaining few are vague enough that they technically still apply.
Six-month or annual reviews. A meeting is scheduled. The manager opens the goal document for the first time since January, scrolls through twelve months of work, writes a paragraph about what happened, assigns a rating, and submits. The employee reads the paragraph once, doesn't recognize half of it, files it.
Calibration committees. A group of senior leaders sits in a room (or a Zoom) and ranks employees against each other across teams. The conversation happens once or twice a year, takes hours, and produces decisions about people most of the participants have never directly worked with. The output is justified after the fact rather than derived from observation.
Forms. A static template asks each employee to assess their own performance against goals set when the world looked different. The form is filled out under deadline pressure, the manager edits it, HR collects it, the employee never sees it again.
This is the system most agile teams operate under. The fact that engineering ships every two weeks and the performance system runs every six months is treated as an accident of cadence rather than a structural conflict. It's not. The two cadences are running on different operating principles, and the slower one is dragging the faster one back.
Where the mismatch shows up
The conflict between agile workflow and annual-cycle HR isn't abstract. It produces specific failures, every cycle, that managers and engineers can name from memory.
Stale goals. The single most common complaint about annual performance management is that the goals are no longer the work. By the time of the review, the manager and the employee are negotiating which of the original goals to give partial credit for, which to substitute, and which to silently drop. The actual work — the new project the engineer led, the production incident she helped resolve, the migration that consumed Q3 — has to be reverse-engineered into goals that never described it.
Lagging feedback. A senior engineer has a rough quarter and doesn't hear about it until the review nine weeks later — by which time the rough patch is over, the work is shipped, and the feedback arrives stripped of any context that would let her act on it. The same dynamic compounds across a team. The pattern was the subject of our previous post on delayed feedback: the cost of delay isn't linear, and review cycles are the most extreme version of delay.
Mismatched signal. Agile teams generate enormous quantities of useful performance signal — pull request reviews, sprint retros, customer support tickets, A/B test outcomes, deployment metrics. Almost none of this signal makes it into the formal review document. The review is built from memory and narrative, not from the data the team has been generating in flow all year.
Disjointed retrospectives. Most teams do team-level retrospectives every sprint or two. Almost no teams do individual-level retrospectives at any cadence shorter than the formal review cycle. The team learns continuously. The individual learns once a year.
Unobservable scale. As teams grow past 10–15 people, the manager's ability to remember what each report did over the year drops sharply. By 25 reports, the review is essentially fiction. By 50, the org is running performance management on data the manager doesn't have.
These aren't edge cases. They're the dominant experience of performance management at agile companies, and they're caused by the cadence mismatch.
What continuous alignment actually looks like
The fix isn't a faster annual review. It's a different cadence entirely — one that matches the cadence of the work.
Continuous alignment means three things in practice.
Goals that move at the cadence of the work. If the team's planning horizon is a quarter, individual goals adjust quarterly. If the team is shipping daily, the relevant unit is the project, not the calendar. The goal isn't to hit January's plan; it's to be working on the right thing today.
Feedback that lands within days, not months. The signal that matters most to a person's development is the signal closest to the behavior. A pull request comment, a 1:1 observation, a peer note from a sprint retro — these are the units of feedback that actually shape future work. The annual review is, at best, a summary of what should have been said throughout the year.
Summaries that derive from observation, not memory. When it's time for a quarterly check-in, the summary should be a synthesis of observations already captured, not an act of remembering. The manager spent the quarter actually managing — capturing observations, having 1:1s, watching the work happen. The summary is the assembly of those inputs, not a rebuild from scratch at the end.
This is what we mean by agile performance management: a system that runs on the same cadence as the work it's trying to evaluate, captures signal as it's generated, and produces summaries by synthesis rather than by recall.
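To make "synthesis rather than recall" concrete, here is a minimal sketch of the assembly step. The types and drafting logic are illustrative assumptions for this post, not any particular product's schema:

```typescript
// Illustrative only: a quarterly summary assembled from observations
// that were captured in flow during the quarter.

interface Observation {
  reportId: string;
  source: "1:1" | "retro" | "code-review" | "peer";
  note: string;
  capturedAt: Date;
}

function draftQuarterSummary(
  observations: Observation[],
  reportId: string,
  quarterStart: Date,
  quarterEnd: Date
): string {
  const inQuarter = observations
    .filter(
      (o) =>
        o.reportId === reportId &&
        o.capturedAt >= quarterStart &&
        o.capturedAt < quarterEnd
    )
    .sort((a, b) => a.capturedAt.getTime() - b.capturedAt.getTime());

  // The manager's job at quarter close is editing this draft,
  // not reconstructing the quarter from memory.
  return inQuarter
    .map(
      (o) =>
        `- [${o.capturedAt.toISOString().slice(0, 10)}] (${o.source}) ${o.note}`
    )
    .join("\n");
}
```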
The technical part isn't hard. The cultural part is.
Why this lives in Slack and Teams, not in an HR portal
There's a second mismatch most companies haven't named yet: the tools where performance management happens, and the tools where work happens, are different tools.
Engineers don't open the HR portal. Product managers don't open the HR portal. Designers don't open the HR portal. The HR portal gets opened on the day a review is due, under duress. It's a destination for forms, not a workspace.
Work happens in Slack. Or Teams. Or in the IDE, the design tool, the CRM, the ticketing system. The signal that matters — the manager noticing something in a thread, the peer noting something in a code review, the report mentioning something in a retro — is generated in those tools, every day. Capturing it requires meeting people where they already are.
This is where the next-generation performance systems are landing. Performance Blocks runs the capture layer through Henry, our coaching agent — embedded directly in Slack, Microsoft Teams, the web app, and email. A manager who notices something in a Slack thread can capture it as a structured observation without leaving the channel. A peer who wants to note something in a code review can do the same. The summary at the end of the quarter is built from the signal that already lives in the tools the team uses every day, not from a new round of remembering.
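To show the shape of in-flow capture, here is a minimal sketch using Slack's Bolt framework. The `/observe` command and the `saveObservation` store are hypothetical illustrations for this post, not how Henry is actually implemented:

```typescript
import { App } from "@slack/bolt";

// Hypothetical persistence layer: swap in whatever store your system uses.
async function saveObservation(obs: {
  managerId: string;
  subject: string;
  note: string;
  channelId: string;
  capturedAt: Date;
}): Promise<void> {
  // e.g. write to a database keyed by (managerId, subject)
}

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// `/observe @person what you noticed` captures a structured observation
// from the channel where the work is being discussed.
app.command("/observe", async ({ command, ack, respond }) => {
  await ack(); // Slack requires acknowledgment within 3 seconds

  const [subject, ...noteWords] = command.text.trim().split(/\s+/);
  const note = noteWords.join(" ");

  if (!subject || !note) {
    await respond("Usage: /observe @person what you noticed");
    return;
  }

  await saveObservation({
    managerId: command.user_id,
    subject,
    note,
    channelId: command.channel_id,
    capturedAt: new Date(),
  });

  // Ephemeral confirmation: only the person who captured it sees this.
  await respond(`Captured an observation about ${subject}.`);
});

(async () => {
  await app.start(Number(process.env.PORT) || 3000);
})();
```

The shape is what matters: capture happens in the channel where the work is discussed, acknowledgment is immediate, and the observation lands in a store that a quarterly summary can draw from later.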
The principle generalizes beyond Performance Blocks. Whatever performance system an agile team adopts, the test is the same: does it run in the tools the work runs in, or does it require a separate destination people only visit under deadline pressure? The answer determines whether the system actually captures signal or just collects it under duress once a year.
Frequently asked questions
What is agile performance management?
Agile performance management is a system that runs on the same cadence as the work it's trying to evaluate — quarterly or shorter — and captures signal as it's generated rather than reconstructing it at the end of an annual cycle. It uses observation-led capture, lightweight check-ins, and summaries built from real-time inputs, embedded in the tools the team already uses (Slack, Teams, the IDE) rather than separate HR forms.
Why don't traditional reviews work for agile teams?
Traditional reviews run on calendar cycles (annually or semi-annually) while agile teams run on work cycles (sprints, releases, projects) that complete in weeks. By the time a review happens, the goals it was assessing are stale, the projects have shipped or been cancelled, and the manager has to reconstruct months of work from memory. The cadences are running on different operating principles, and the slower one undermines the faster one.
How can performance feedback be embedded in Slack?
Modern performance platforms run capture and feedback flows directly inside Slack and Microsoft Teams, the tools where most work conversations already happen. A manager can capture a structured observation about a report from the same channel where the work is being discussed, prompt a peer for feedback after a code review, or trigger a 1:1 agenda item — all without opening a separate HR portal. The capture happens in flow rather than in batch.
How often should an agile team review individual performance?
A quarterly cadence is the most common modern baseline, with weekly or biweekly check-ins as a complement. The right cadence depends on the work cadence: teams shipping every sprint can run individual retrospectives every sprint or two; teams shipping less frequently can move to monthly. The principle is that performance review cycles should not lag the work cycle by more than one full unit — anything longer turns the review into reconstruction rather than reflection.
What metrics matter for evaluating an agile engineer?
The metrics that matter for an agile engineer aren't the metrics most performance systems capture. The strongest signals come from peer feedback in code reviews, contributions to team retros, ownership of production incidents, mentorship of junior engineers, and the quality of the trade-offs they make under uncertainty. None of these fit cleanly into a numerical rating. They show up in observations that have to be captured in flow and aggregated over time, not derived from a static form.
What we know — and what we're refining
If you manage an agile team and your performance system still runs on an annual or semi-annual cycle, the move this week is small: open whatever document you'll eventually use to write a review, and start a running file for each direct report — one note per 1:1, one observation per sprint retro, one peer comment captured from a code review. Ten minutes a week. By the time the next formal review arrives, you'll have a draft.
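One hypothetical shape for that running file (names, dates, and notes invented for illustration):

```
Running file: Priya R. (Q3)

2025-07-08 (1:1): Raised the flaky-CI problem unprompted; proposed a fix and owned it.
2025-07-15 (retro): Credited by two peers for unblocking the data migration.
2025-07-22 (code review): Thorough review of the auth change; caught a regression before merge.
```

Plain text is enough. The value is in the cadence of capture, not the format.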
Agile performance management isn't a marketing phrase. It's the only structure we've seen actually work for teams running modern software workflows: continuous goals, in-flow capture, synthesis at quarter close. The pattern matches the rhythm of the work. We've built Performance Blocks around this premise — Henry sits inside Slack, Teams, and the manager's existing flow, captures observations as work happens, and turns the running file into a summary when it's time. The HR portal nobody opens isn't part of our model because we don't think it can be part of any model that actually works for agile teams.
The detail we're still refining is how short the cycle can run before the overhead of synthesis exceeds the value of the signal. Quarterly clearly works. Monthly works for some teams. Biweekly probably doesn't, but we'd want better data before calling it. If you've experimented with short performance cycles on a real team, we'd genuinely like to hear what you found.
The agile transformation in engineering and product took fifteen years to play out. The agile transformation in performance management is starting now. The teams that figure it out first will be running on a system designed for the work they actually do — not the work they were doing in 2010.
