Henry analytics

The admin-only analytics dashboard for Henry — usage, adoption, top intents, friction points, and how to read the data without misinterpreting it.

Henry analytics is the admin-only dashboard that shows how Henry is being used across your organization. It exists so you can answer two practical questions: are people getting value out of Henry, and where is Henry letting them down? The answers shape how you roll out features, where you focus training, and what to bring up with us if something is broken.

This article covers where the dashboard lives, what it measures, the two main views (usage and friction), the time ranges and filters available, how to read the data without overreacting to noisy numbers, and the privacy boundaries that keep individual content out of analytics.

Plan availability: Henry Agent is available on the Agentic plan. Team plan customers can upgrade in Settings → Billing.

Where the dashboard lives

Henry analytics is reachable from Settings → AI → Analytics (admin role required). Non-admin users do not see the link in the navigation, and the route returns an unauthorized response if accessed directly.

The feature is gated by the henryAnalytics toggle in addition to the underlying Henry availability. If your organization has Henry enabled but the analytics feature has not been turned on, you will not see the dashboard. Account owners can enable it from the same settings area.

The dashboard is a single page with two main views (usage and friction), surface and time-range filters at the top, and an export option for pulling raw data into a spreadsheet for further analysis.

What the dashboard measures

Analytics aggregates Henry interactions into a small set of metrics. Everything is org-scoped; you do not see other organizations' data and they do not see yours.

The core metrics are:

  • Suggestions made — the number of distinct AI responses Henry has produced. A response in chat is one suggestion; a draft inside a guided flow is one suggestion.
  • Adoption rate — the percentage of suggestions that were acted on. For drafts, "acted on" means the draft was saved or used in a downstream record. For chat answers, "acted on" is harder to define; the dashboard uses heuristics like follow-up questions, copying the response, or continuing the conversation.
  • Top intents — the kinds of tasks users most commonly bring to Henry. Intents are coarse buckets like "draft observation," "summarize a person," "ask about a process," "prep for 1:1." The list shows ranking and trend.
  • Surfaces used — the share of activity across the five surfaces (web panel, Slack, Teams, Chrome extension, email).

Each metric is shown as a current value and a trend over the selected time range. Where the metric is countable, you also see the totals.
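
To make the adoption definition concrete, here is a minimal sketch of how a rate like this could be computed. The record shape and field names are illustrative assumptions, not the dashboard's actual schema:

```python
from dataclasses import dataclass

# Illustrative record shape; these field names are hypothetical, not the
# actual analytics schema.
@dataclass
class Suggestion:
    kind: str        # "draft" (guided flow) or "chat"
    saved: bool      # draft was saved or used in a downstream record
    copied: bool     # user copied the response
    continued: bool  # user asked a follow-up or kept the conversation going

def was_adopted(s: Suggestion) -> bool:
    # Drafts count when saved or used downstream; chat answers fall back to
    # the engagement heuristics described above.
    return s.saved if s.kind == "draft" else (s.copied or s.continued)

def adoption_rate(suggestions: list[Suggestion]) -> float:
    # Adopted suggestions as a share of all suggestions made.
    if not suggestions:
        return 0.0
    return sum(was_adopted(s) for s in suggestions) / len(suggestions)
```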

Usage view

The usage view answers "is Henry being used, by whom, and for what?"

Active users

The headline number is active Henry users — accounts that have interacted with Henry at least once in the selected time window. The dashboard breaks this down by:

  • Total active users across all surfaces.
  • Active users by surface, so you can see, for example, that 80% of Henry use is in the web panel and 20% is in Slack.
  • Active users by role, so you can see whether managers, employees, or admins are the dominant users.

The trend over time shows whether activity is growing, flat, or declining. A declining trend is not automatically bad — it might mean the people who needed Henry have already gotten value and moved on — but it is worth a look.

Suggestions and adoption

Below active users, the dashboard shows suggestions made and adoption rate.

  • Suggestions is a volume metric. It tells you how much Henry is doing.
  • Adoption is a quality metric. It tells you whether what Henry produces is actually useful.

Both should be considered together. High suggestions with low adoption means people are asking but not finding what they need — friction worth investigating. Low suggestions with high adoption means the people who do use Henry get value, but most of your org is not engaged — a rollout or training opportunity.
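
If it helps to keep the combinations straight, here is the same reasoning as a lookup table. The two combinations not spelled out above are filled in by extension, and "high" versus "low" is a judgment against your own baseline, not an absolute bar:

```python
# The volume/adoption combinations described above, as a lookup.
READINGS = {
    ("high", "high"): "healthy: broad engagement and useful output",
    ("high", "low"):  "people ask but do not find what they need; investigate friction",
    ("low", "high"):  "engaged users get value; a rollout or training opportunity",
    ("low", "low"):   "little engagement and little value; revisit the rollout itself",
}

def diagnose(suggestion_volume: str, adoption: str) -> str:
    return READINGS[(suggestion_volume, adoption)]

print(diagnose("high", "low"))
```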

Top intents

Intents tell you what kind of work Henry is mostly doing. A top-intents list typically looks like this:

Intent               Share   Trend
Draft observation    32%     up
Prep for 1:1         21%     flat
Look up a person     14%     up
Draft summary        11%     up
Ask about a doc       9%     up
Other                13%     flat

This view is useful for two reasons:

  • It tells you which features of Henry to invest in. If "draft summary" is climbing, your knowledge base content for summaries matters more.
  • It tells you which features are not being used. If nobody is asking Henry for 360 feedback help, that may be a training gap rather than a product issue.

Surfaces

Surface breakdown shows how Henry activity is distributed across the web panel, Slack, Teams, Chrome extension, and email. Each surface is shown both in absolute terms (number of suggestions) and as a percentage of all Henry activity.

This view helps you understand how people prefer to interact with Henry. If Slack usage is dominant, you might prioritize Slack-specific training. If the Chrome extension has zero adoption, you might decide it is not worth the install effort and turn the surface off.

Friction view

The friction view answers "where is Henry letting people down, and what can we do about it?"

Abandonment

The most actionable friction signal is flow abandonment — guided flows that users start but cancel before completing. The dashboard shows:

  • Abandonment rate by flow. A high rate on a specific flow (e.g., 40% of start-summary flows are canceled) suggests the flow has a problem worth investigating.
  • Step where users abandon. Knowing that users typically cancel at the editor step versus the selection step changes what you would do about it.

A modest amount of abandonment is normal — sometimes people start a flow and realize they need more context first. But persistent high abandonment on one flow is a signal that something is off, whether it is the prompt, the inputs Henry has access to, or the user expectation.
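
If you want to reproduce both signals from raw data, they reduce to simple aggregations. The sketch below assumes a hypothetical record per started flow run, with its name, the last step reached, and whether it completed; that shape is an illustration, not the documented export schema:

```python
from collections import Counter, defaultdict

# Hypothetical flow-run records, one per started flow. The field names are
# illustrative; check your actual export before relying on them.
runs = [
    {"flow": "start-summary", "last_step": "editor", "completed": False},
    {"flow": "start-summary", "last_step": "done", "completed": True},
    {"flow": "draft-observation", "last_step": "selection", "completed": False},
]

started = Counter(r["flow"] for r in runs)
abandoned = Counter(r["flow"] for r in runs if not r["completed"])

# Abandonment rate by flow: the share of started runs that never completed.
for flow, n in started.items():
    print(f"{flow}: {abandoned[flow] / n:.0%} abandoned")

# Step where users abandon: the last step reached in abandoned runs.
abandon_steps = defaultdict(Counter)
for r in runs:
    if not r["completed"]:
        abandon_steps[r["flow"]][r["last_step"]] += 1
for flow, steps in abandon_steps.items():
    step, count = steps.most_common(1)[0]
    print(f"{flow}: most cancellations at the {step} step ({count} runs)")
```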

Common follow-up questions

When users get a response from Henry and immediately ask a follow-up, that follow-up tells you what was missing in the first response. The dashboard surfaces common follow-up patterns:

  • Frequently asked clarifications ("what do you mean by X?")
  • Common requests for more detail ("can you make this longer / shorter / more specific?")
  • Common requests to redo ("try again," "different angle")

If a particular follow-up pattern is dominant for a particular intent, that points at a calibration issue. For example, if "draft observation" responses frequently get "more specific please" as a follow-up, your users are getting drafts that feel generic — possibly because the raw context provided is too thin, or the knowledge base is not surfacing enough domain detail.
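
The export does not include message content, so you cannot run this kind of classification yourself on analytics data, but to make the buckets concrete, here is a naive sketch of how follow-ups can be categorized. The keyword lists are placeholders, and any real classifier would be more sophisticated:

```python
# Naive keyword buckets mirroring the three follow-up patterns above.
# The keywords are placeholders for illustration only.
BUCKETS = {
    "clarification": ("what do you mean", "which one", "can you clarify"),
    "more_detail":   ("longer", "shorter", "more specific", "expand"),
    "redo":          ("try again", "different angle", "redo", "start over"),
}

def bucket_follow_up(text: str) -> str:
    lowered = text.lower()
    for name, keywords in BUCKETS.items():
        if any(k in lowered for k in keywords):
            return name
    return "other"

assert bucket_follow_up("Can you make this more specific?") == "more_detail"
assert bucket_follow_up("Hmm, try a different angle") == "redo"
```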

Low-confidence intents

Some Henry responses are produced with low retrieval confidence — the question did not match anything strongly in the knowledge base or in your in-product data. The dashboard shows which intents most often produce low-confidence responses.

Low confidence is not the same as wrong. Henry can still produce a useful answer with low retrieval confidence, drawing on its general knowledge. But a high rate of low-confidence responses for a particular intent is usually a knowledge base gap. If "ask about leveling" has a high low-confidence rate, your leveling docs are probably missing or not indexed well.
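
Spotting those gaps in exported data is a one-pass aggregation. The sketch below assumes hypothetical per-response rows carrying an intent and a low-confidence flag; the real export may aggregate differently:

```python
from collections import Counter

# Hypothetical per-response rows; the field names are illustrative.
rows = [
    {"intent": "ask about leveling", "low_confidence": True},
    {"intent": "ask about leveling", "low_confidence": True},
    {"intent": "ask about leveling", "low_confidence": False},
    {"intent": "draft observation", "low_confidence": False},
]

total = Counter(r["intent"] for r in rows)
low = Counter(r["intent"] for r in rows if r["low_confidence"])

# Intents with a high low-confidence share are candidate knowledge base gaps.
for intent in sorted(total, key=lambda i: low[i] / total[i], reverse=True):
    print(f"{intent}: {low[intent] / total[intent]:.0%} low-confidence")
```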

Time ranges and filters

The dashboard supports a few common time ranges:

  • Last 7 days
  • Last 30 days
  • Last quarter
  • Custom range (pick start and end dates)

Trend lines respect the selected range. Switching from "Last 7 days" to "Last quarter" rescales every chart and recomputes every percentage.

You can also filter by:

  • Surface — restrict to one or more of the five surfaces.
  • Role — managers, employees, admins.
  • Intent — restrict to a specific intent or set of intents.

Filters apply to every metric on the page so you can ask focused questions like "what is the adoption rate for managers using the web panel for draft-observation flows in the last 30 days?" without leaving the dashboard.

Reading the data well

A few common mistakes to avoid when interpreting analytics.

Adoption is not a single number to maximize

Adoption rate is a useful metric but it is not a goal on its own. Some intents are inherently lower-adoption because the work is exploratory. Asking "what did I write about Marcus last quarter" is high-value even if you do not "act on" the answer in any traceable way — you read it, used it as context for your own thinking, and moved on. The dashboard's heuristics try to capture this kind of soft adoption but they will undercount it.

If you push for adoption rate as a KPI, you risk shaping behavior in unhelpful ways (people forcing themselves to save Henry drafts they do not actually want to keep, just to show usage). Use adoption as a directional signal, not a target.

Low usage is not always a problem

Henry's value is not tied to total interaction count. A team that uses Henry for the heavy lifts (summaries, calibration prep) and not for daily chatter may have lower total volume than a team using Henry for everything, but the heavy-lift team may be getting more value per interaction. Look at intents and outcomes alongside volume.

High volume on one surface is informational, not normative

If your org uses the web panel heavily and Slack barely, that is a fact about your team's workflow, not a problem to fix. The five surfaces exist so people can use Henry where they already work. Pushing Slack adoption when nobody wants it is wasted effort.

Compare to your own baseline

Adoption and usage benchmarks vary widely by company. Compare each metric to your own historical trend, not to an absolute target. A 35% adoption rate that grew from 20% is a healthy story. A flat 35% adoption rate that has not changed in six months is a different story.

Investigate spikes and dips

Sudden spikes in low-confidence responses, abandonment, or specific intents are usually triggered by something concrete: a new flow rolled out, a knowledge base doc was removed, a new team onboarded, or a feature changed. Tie spikes to specific changes when you can.

Privacy and what analytics does not show

Henry analytics is aggregate by design. It does not show:

  • The content of any individual user's chat with Henry.
  • The content of any individual draft Henry produced.
  • The text of any specific knowledge base citation in a specific response.
  • Per-user transcripts, prompt content, or attachment content.

The dashboard tells you that "draft observation" is the most common intent. It does not tell you which observations were drafted, who they were about, or what they said.

This is intentional. Analytics exists so admins can manage Henry as a feature; it does not exist as a surveillance tool to read what individual users are talking to Henry about. Per-user content stays scoped to that user (and the people normally permitted to see their performance management work).

If you need to investigate a specific issue and aggregate data is not enough, contact your account team rather than trying to derive it from analytics. There are deliberate boundaries here.

Exporting data

The dashboard supports an export of the metrics shown — usage volumes, adoption rates, intent breakdowns, surface shares, friction signals — for the selected time range and filters. The export is a CSV-style file you can drop into a spreadsheet or BI tool.

The export includes the same aggregate metrics shown in the UI. It does not include any per-user content for the same privacy reasons described above.

If you build internal reporting on top of these exports, treat the column structure as stable but expect occasional additions when new metrics ship. Avoid hard-coding column positions; key off column names.
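
For instance, here is a minimal reader that keys off column names rather than positions and answers the kind of focused question described in the filters section. The column names, values, and file name are assumptions for illustration; inspect your own export's header row and adjust:

```python
import csv

# The column names below are assumptions about the export schema, not
# documented fields. Keying off names (via DictReader) keeps this working
# when new columns ship.
def filtered_adoption_rate(path: str, surface: str, role: str, intent: str) -> float:
    suggested = adopted = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["surface"], row["role"], row["intent"]) != (surface, role, intent):
                continue
            suggested += int(row["suggestions"])
            adopted += int(row["adopted"])
    return adopted / suggested if suggested else 0.0

# e.g. adoption rate for managers using the web panel for draft observations
print(f"{filtered_adoption_rate('henry_export.csv', 'web panel', 'manager', 'draft observation'):.0%}")
```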

When low adoption is actually fine

A specific case worth calling out, because it confuses people.

Some Henry interactions are genuinely zero-impact-by-design. A user opens Henry, asks "what did I write about Sara in Q1?", reads the answer, closes the panel, and goes back to writing their summary by hand. The analytics will show one suggestion and zero adoption — but the user got real value. Their summary is better because they remembered something they had written; they just did not save Henry's response anywhere traceable.

This pattern is common with managers who use Henry as a memory aid. It depresses the adoption rate without indicating a problem.

If your dashboard shows low adoption for "look up a person" intents in particular, do not panic. That is the use case where adoption-by-the-metric is hardest to capture. Look at follow-up patterns and abandonment instead — those are more reliable signals for that kind of work.
