In late 2012, Donna Morris was in Delhi when she told a reporter Adobe was killing the annual performance review. The decision wasn't fully approved yet. She had been thinking about it for months, the question was on her mind, and she said it. By the next morning the comment was on the front page of the Economic Times. Within a year Adobe had built a replacement called Check-In and rolled it out to roughly 11,000 employees. Voluntary turnover dropped about 30 percent.
The reason Adobe walked away wasn't ideological. It was operational: the company calculated that its managers were spending tens of thousands of hours a year on a process that almost nobody — managers, employees, or HR — actually believed produced good decisions. Annual reviews were expensive theater.
What's striking is that fourteen years later, most companies still run them. The system Adobe walked away from is still the default at most mid-sized organizations. And the reasons it failed at Adobe in 2012 haven't gone away. They've gotten worse.
This is a post about why annual performance reviews are dying, what's replacing them, and why the replacements only became practical in the last few years.
How we got annual reviews in the first place
The annual review wasn't designed. It accumulated.
The earliest formal performance ratings in the U.S. came out of the military — the Army developed merit rating systems during the First World War to evaluate officers at scale. The systems migrated to civilian work after 1945, picked up by General Motors, IBM, and the federal government in the 1950s. Peter Drucker named the underlying philosophy "Management by Objectives" in his 1954 book The Practice of Management. By the 1980s, when Jack Welch institutionalized stack ranking at GE, the annual cadence had been the default for two generations.
The cadence — once a year — was never the point. It tracked something else: the budget cycle. Compensation decisions had to be made annually because compensation budgets were set annually. The performance review existed to feed the compensation process, and the compensation process ran on a calendar.
Everything else got bolted onto a process whose actual purpose was distributing a fixed pool of money once a year. The development conversation. The goal-setting ritual. The numerical rating. The calibration meeting. The 360. The self-assessment. The form.
That's the inheritance most companies still operate under. The original purpose has barely shifted. The work being managed has shifted enormously.
Why recency bias ruins annual reviews
Ask a manager to evaluate twelve months of someone's work and they will, in practice, evaluate the last six weeks.
This isn't a discipline problem. It's a memory problem. Daniel Kahneman and Amos Tversky published the foundational work on the availability heuristic in 1973: when we judge the frequency or weight of something, we use whatever examples come to mind first. For a manager writing a review on a Sunday night in December, the examples that come to mind first are the ones from October and November — not from February.
The peak-end rule, from Kahneman and Barbara Fredrickson's 1993 work, makes it worse. We remember experiences disproportionately by their peaks (the most intense moments) and their endings. A direct report who shipped quietly all year and then missed a deadline in November is remembered as a deadline-misser. A direct report who struggled all year and then closed a major project in November is remembered as a strong finisher.
Annual reviews don't just happen to be vulnerable to recency bias. They're shaped by it. The structure — twelve months of work, one writing session, one rating — is exactly the structure cognitive science predicts will produce the most distorted outputs.
The standard mitigation is "keep notes throughout the year." It works, when managers do it. Most don't. The friction of opening a document, finding the right person's section, and writing a clear note about what just happened is high enough that the practice falls off the calendar by week three. Even the managers who keep notes tend to capture only the noteworthy events — the failures and the unusual successes — not the steady week-to-week pattern that actually defines someone's contribution.
Modern work moves faster than the review cycle
In 2026, the average software project lifecycle at a growing company is six to ten weeks. The average B2B sales cycle is three to nine months. The average tenure of an early-career hire at a startup is under two years. None of these timeframes maps cleanly onto twelve months.
By the time a manager writes a review in December, half of what the report did that year is no longer running, no longer relevant, or owned by someone else. The product they shipped in March was sunset in October. The customer they saved in June churned in September anyway, for unrelated reasons. The skill they developed in August has been replaced by the new framework introduced in November.
Distributed and remote work compounds the problem. A co-located manager in 2010 had ambient visibility — they saw the report's work happen, heard the side conversations, observed the after-the-meeting cleanup. A remote manager in 2026 sees what's pinged into Slack, what shows up in pull requests, and what surfaces in 1:1s. The signal is sparser and more selective. Annual review writing depends on a year of memory to draw on. Remote management produces a year of fragments.
The result is that the annual review's central conceit — that you can summarize twelve months of contribution in a single document, written in a single sitting — has gone from awkward to fictional. The summary isn't a summary anymore. It's an interpretation of recent fragments, with older fragments confabulated to fill the gaps.
What's replacing the annual review
The shift away from annual reviews has been underway for over a decade.
Adobe killed theirs in 2012. Microsoft eliminated stack ranking in 2013, announced by HR chief Lisa Brummel in a company-wide memo. GE — the company that institutionalized stack ranking under Welch — phased it out in 2015. Deloitte published a widely cited HBR piece the same year, "Reinventing Performance Management" by Marcus Buckingham and Ashley Goodall, describing how the firm had restructured its review process around weekly check-ins and what it called "performance snapshots." Accenture followed in 2016. Goldman Sachs eliminated numerical ratings for most employees in 2017.
The replacements are not uniform, but they share a few moves.
Decoupling. Companies that used to run one process to handle compensation, development, calibration, ratings, and feedback now run several. Compensation is decided in its own cycle, often annually but with looser ties to formal reviews. Development conversations happen quarterly or monthly. Calibration is a separate conversation that doesn't pretend to also be a coaching conversation.
Shorter cadence. Quarterly is the most common new cadence — long enough to evaluate something meaningful, short enough that recency bias has less to overwrite. Some teams have gone monthly, especially in fast-moving environments; the comparative data on whether monthly outperforms quarterly is thin and inconclusive. Weekly check-ins are common as a complement, not a replacement.
Observation-led capture. Instead of asking managers to remember a year of work in December, capture the work as it happens, throughout the year, in the natural flow of how managers already work. Some teams do this in shared docs. Some do it in dedicated channels. Some use purpose-built tools — Lattice, Culture Amp, 15Five, and Performance Blocks all sit somewhere on this spectrum, with different opinions on how much structure the capture should have and how much synthesis happens automatically versus by hand.
None of this is new as an idea. The model has been described in HBR, in McKinsey research, and in industry reports for more than a decade. What's new is that the operational cost of running it dropped enough for smaller companies to adopt it without an HR engineering team.
Why AI made continuous feedback finally workable
The historical objection to continuous feedback wasn't ideological. It was overhead.
When Adobe killed the annual review in 2012, the engineering of the replacement took two years. They had to design new manager training, new conversation templates, new rating criteria, new compensation logic. The result worked — a 30 percent reduction in voluntary turnover is a real number, not a marketing one — but the implementation cost was significant and the model didn't easily port to smaller companies that didn't have Adobe's HR engineering capacity.
For most mid-sized companies in 2015, continuous feedback was a great idea that asked managers to do four times as much work as they were already failing to do. The math didn't work.
What's changed is that AI handles two parts of the workload that managers were never going to do consistently. The first is capture — converting a Slack message, a 1:1 note, or a passing observation into structured feedback that can be retrieved later. The second is synthesis — taking dozens of small observations across a quarter and producing a useful summary, faster than a manager could write one from scratch and more comprehensive than what a manager would remember on their own.
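To make the synthesis half of that workload concrete, here is a minimal sketch in Python of the operation: a quarter's worth of short, dated observations in, one grounded summary out. Everything in it is illustrative. The client, the model name, the prompt, and the example observations are assumptions made for the sketch, not a description of how Henry or any other product actually works.

```python
# Illustrative sketch only: observations in, one grounded summary out.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name, prompt, and example data are placeholders, not anyone's
# production setup.
from openai import OpenAI

client = OpenAI()

def synthesize_quarter(person: str, observations: list[str]) -> str:
    """Draft a quarterly summary from short, dated observations."""
    notes = "\n".join(f"- {o}" for o in observations)
    prompt = (
        f"Here are dated observations about {person}'s work this quarter:\n"
        f"{notes}\n\n"
        "Write a short summary of their contribution. Use only facts that "
        "appear in the observations above; do not invent events or outcomes."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical observations captured in flow during the quarter.
print(synthesize_quarter("Priya", [
    "2026-01-14: unblocked the billing migration by pairing with the infra team",
    "2026-02-02: flagged the analytics slip a week before the deadline",
    "2026-03-09: wrote the postmortem that anchored the Q1 incident review",
]))
```

The interesting part of the sketch isn't the model call. It's the shape of the input: short observations captured as they happened are what make the summary trustworthy, whoever or whatever ends up writing it.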
The implication isn't that AI writes the review. The conversation between a manager and their direct report is irreducibly human, and the people who think otherwise are usually the people who haven't had to have one. AI changes what arrives at the conversation. Instead of a manager scrambling on a Sunday night to remember twelve months of context, they walk in with three months of structured observations already organized. The conversation becomes higher quality because the inputs are.
Performance Blocks runs this layer through Henry, our AI assistant. Henry drafts observations from a manager's quick description of what happened and synthesizes them into summaries grounded in the underlying observations, objectives, and peer feedback. The structure is the same regardless of who does the work — customers on the Team plan capture and synthesize the same observations by hand.
This is the part of the shift that took the longest to become practical. Language models needed to be good enough to summarize observations without distorting them, fast enough to be useful in a manager's flow, and cheap enough to run on every team member every week. None of that was true in 2018. By 2024, it was. By 2026, it's table stakes for any continuous performance system that wants to scale below the enterprise tier.
What to do if your company still runs annual reviews
Most companies still run annual reviews. Most readers of this post manage people inside one of those companies. The shift this post describes won't reach you on its own, and it won't reach you because HR adopted a new vendor.
The single most useful thing a manager can do this week, regardless of what their company's review system looks like, is to start a running file. One note per direct report. One observation per 1:1. Ten lines a week. By the time the annual review comes around in December, the file is the review — or at least the source material. The structure of the file matters less than its existence.
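For concreteness, one hypothetical shape such a file could take is below. The names, dates, and layout are illustrative; any format that survives contact with a busy week is good enough.

```
Priya
  2026-01-14  unblocked the billing migration by pairing with infra
  2026-01-21  1:1: wants more scope on the analytics rewrite next quarter
  2026-02-02  flagged the analytics slip a week before the deadline

Marcus
  2026-01-15  customer call: handled the renewal pushback without escalating
  2026-01-29  1:1: asked for feedback on the pricing memo; gave it in writing
```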
This is what observation-led performance management looks like in its lightest form: capture in flow, synthesize on demand. It works inside an annual review process. It works better outside one. The discipline matters more than the tool.
The companies still running annual reviews aren't running them because they think the system works. They're running them because the cost of switching is real and the political will to do it isn't there yet. The system is dying. It just isn't dead.
Frequently asked questions
Why are annual performance reviews bad?
Annual reviews ask managers to evaluate twelve months of work in a single sitting, which produces evaluations heavily biased toward the most recent six weeks. Cognitive research on availability bias and the peak-end rule shows this distortion is structural, not a discipline problem. The result is feedback that doesn't accurately reflect a year's contribution and arrives too late to influence behavior in the period being evaluated.
What is replacing annual performance reviews?
The most common replacement is a quarterly cadence with weekly or biweekly check-ins, decoupled from compensation decisions and supplemented by ongoing observation capture. Companies including Adobe, Microsoft, GE, Deloitte, Accenture, and Goldman Sachs made this shift between 2012 and 2017. The model has since become practical for smaller companies because AI tools handle the capture and synthesis overhead that previously required dedicated HR engineering.
Are annual performance reviews effective?
The research is consistent that annual reviews are poor at their two stated purposes — improving performance and developing people. They are reasonable at administering compensation, which is what they were originally designed for. Most companies that still run them recognize this gap and either have added supplementary processes or are in the early stages of transitioning to a different model.
How often should performance reviews happen?
Quarterly is the most common modern cadence, with weekly check-ins as a complement. Some teams use monthly cycles, especially in fast-moving environments, but the comparative data on monthly versus quarterly is thin and inconclusive. The cadence matters less than the discipline of capturing observations throughout the period rather than reconstructing them at the end.
What is continuous performance management?
Continuous performance management is the practice of capturing observations, giving feedback, and having development conversations on an ongoing cadence — typically weekly or biweekly check-ins plus quarterly reviews — rather than concentrating those activities into a single annual cycle. It usually decouples performance feedback from compensation decisions and uses lightweight tools to reduce the per-conversation overhead.
What we know — and what we're refining
If you manage people and your company still runs an annual review, the move this week is small: open a doc, list your direct reports, write one line under each name about the most useful thing they did this month. It will take fifteen minutes. Do it again next month.
A quarterly cadence with weekly check-ins is the right shape for most teams — long enough to evaluate something meaningful, short enough that recency bias has less to overwrite, frequent enough that the next conversation is never more than five weeks away. Adobe and Deloitte arrived at this pattern independently between 2012 and 2015, and it has held up against thirteen years of attempted improvements at companies from Microsoft to Goldman Sachs. We've built Performance Blocks around this exact shape — the observations are captured in flow, the summaries land at the end of each quarter, the conversations between quarters keep the file warm. It's the baseline we'd recommend to any manager designing their own system from scratch, and we think it's the right baseline for the next decade too.
The detail we're still refining is whether the same shape holds at scale — the data is thin on whether quarterly cadences with weekly check-ins survive the move from a team of seven to a team of seventy without modification. If you've run continuous performance practices on a larger team, we'd genuinely like to compare notes.
The system most companies still run wasn't designed for the work most companies now do. Adobe figured that out in 2012. The rest is catching up — and the shape they're catching up to is the one we've already built.
