For Managers
Attributes and skills
Use the attribute library to tag observations, rate direct reports against competencies, and surface attribute coverage in summaries.
Attributes are the competency tags your org admin maintains in the attribute library. They are the structured vocabulary your team uses to describe what good looks like — communication, decision-making, technical depth, customer focus, and so on.
You use attributes in two main places: tagging observations, and (when enabled) rating your direct reports against the attribute library on a regular cycle.
How the attribute library works
The attribute library is curated by org admins. It typically includes:
- Core attributes — competencies expected of every person in the org (e.g., "Customer focus," "Collaboration").
- Role-family attributes — competencies specific to a role family (e.g., "Code quality" for engineers, "Pipeline management" for sales).
- Level-specific attributes — competencies that apply only at certain seniority levels (e.g., "Technical leadership" for senior engineers).
Each attribute has a name, a short description of what good looks like, and (often) a definition by level. The library is not designed to be edited by managers — it is the org's shared performance vocabulary, maintained centrally for consistency.
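If it helps to have a mental model, each library entry can be pictured as a small structured record. The sketch below is illustrative only, assuming hypothetical field names rather than the product's actual schema.

```ts
// Illustrative sketch of a library entry; field names are assumptions,
// not the product's actual schema.
type AttributeScope = "core" | "role-family" | "level-specific";

interface Attribute {
  id: string;
  name: string;                              // e.g. "Customer focus"
  description: string;                       // what good looks like
  scope: AttributeScope;
  roleFamily?: string;                       // set when scope is "role-family"
  levelDefinitions?: Record<string, string>; // e.g. { "Senior engineer": "..." }
  parentId?: string;                         // e.g. "Written communication" pointing at "Communication"
}
```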
If you think an attribute is missing or poorly defined, send a request to your org admin. Editing the library is covered in the admin docs.
Tagging observations
The most common way you use attributes is by tagging observations as you write them. When you start typing in the Attributes field on the observation form, the autocomplete pulls from your org's library.
How many attributes per observation
A few rules of thumb (a quick sanity-check sketch follows this list):
- 1–3 attributes is the sweet spot. Most observations are about one underlying competency; a few span two or three.
- Don't exceed five. The form caps at five, but five is already too many in most cases. If you find yourself tagging five, the observation is probably trying to do too much — split it into two.
- Don't tag what is not demonstrated. If the observation does not show evidence of the attribute, do not tag it. "Leadership" should not be tagged on an observation about code quality.
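Taken together, the rules above amount to a sanity check on the tag list. The helper below is a hypothetical sketch of that check, not the product's actual form validation; the thresholds simply mirror the rules of thumb.

```ts
// Hypothetical sanity check mirroring the rules of thumb above;
// not the product's actual form validation.
function checkTags(attributeIds: string[]): string[] {
  const warnings: string[] = [];
  const unique = new Set(attributeIds).size;

  if (unique === 0) warnings.push("Tag at least one attribute the observation demonstrates.");
  if (unique > 3) warnings.push("More than three tags usually means the observation should be split.");
  if (unique > 5) warnings.push("The form caps tags at five.");

  return warnings; // an empty list means the tag count looks reasonable
}
```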
Choosing the most specific attribute
If your library has both "Communication" and "Written communication," tag the more specific one when the evidence is specifically written. Use the parent ("Communication") only when the evidence spans multiple modes.
Common tagging mistakes
- Tagging your aspirations, not the evidence. It is tempting to tag "Strategic thinking" on an observation about a thoughtful project plan because you want the employee to be more strategic. Tag what the observation actually demonstrates.
- Tagging by category instead of behavior. "Engineering" is not an attribute — it's a role family. Tag the underlying behavior: code quality, system design, debugging, code review.
- Tagging the same attribute on every observation. If every observation is tagged "Execution," you are not really using the library. Mix in the more specific attributes that the library was built to capture.
Why tagging matters
Attribute tags drive several downstream features (a filtering sketch follows this list):
- Summary evidence — when you draft a summary, you can filter the evidence panel by attribute, making it easy to write the strengths and opportunities sections.
- Attribute coverage in summaries — both individual and team summaries surface which attributes the period's observations touched on, and which were left untouched.
- Manager attribute ratings — if your org has manager ratings enabled (see below), the rating UI shows the observations that support each rating.
- Trend reporting — the Insights page (when available in your plan) shows attribute trends over time.
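Under the hood, each of these features reduces to filtering and grouping observations by their attribute tags. The sketch below shows roughly what that filter looks like; the shapes and field names are assumptions, not the product's data model.

```ts
// Hypothetical shapes; field names are assumptions, not the product's data model.
interface Observation {
  subjectId: string;       // the direct report the observation is about
  date: string;            // ISO date, e.g. "2024-05-14"
  kind: "strength" | "opportunity";
  attributeIds: string[];  // tags drawn from the attribute library
}

// Evidence-panel style filter: one person, one attribute, one period.
// ISO date strings compare correctly with plain string comparison.
function evidenceFor(
  observations: Observation[],
  subjectId: string,
  attributeId: string,
  periodStart: string,
  periodEnd: string,
): Observation[] {
  return observations.filter(
    (o) =>
      o.subjectId === subjectId &&
      o.attributeIds.includes(attributeId) &&
      o.date >= periodStart &&
      o.date <= periodEnd,
  );
}
```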
Manager attribute ratings
Manager-rated attributes require the managerAttributes feature. If the feature is not enabled in your org, you can still tag observations with attributes, but you will not be asked to rate direct reports against the library.
Some organizations want managers to formally rate each direct report against the attribute library on a regular cycle. The rating gives leadership a structured view of where each person sits across the competency framework, beyond what's captured in summaries.
When ratings happen
Your org admin defines the rating cycle — typically annually or semi-annually, often aligned with the formal summary cycle. When a cycle opens, you receive a notification and a rating task appears on your dashboard.
Doing a rating
To complete a rating cycle (a sketch of the resulting rating record follows the steps):
- Open the rating task from your dashboard.
- For each direct report, you see the attributes that apply to their role and level.
- For each attribute:
  - Pick a rating level from the org's defined scale (commonly: Below expectations, Meets, Exceeds, Significantly exceeds).
  - Optionally add a comment.
  - Review the supporting observations the system surfaces — observations you have authored about this person, tagged with this attribute, in the rating period.
- Pause whenever you need to; the form autosaves your draft as you go.
- Submit when complete.
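As a rough mental model of what a completed cycle produces, the sketch below shows one plausible shape for a submission. The field names are assumptions, and the scale labels shown are only the common convention; your org admin configures the real scale.

```ts
// Hypothetical shape of a submitted rating cycle; field names and labels
// are assumptions, and the real scale is configured by your org admin.
type RatingLevel =
  | "Below expectations"
  | "Meets"
  | "Exceeds"
  | "Significantly exceeds";

interface AttributeRating {
  reportId: string;                    // the direct report being rated
  attributeId: string;
  level: RatingLevel;
  comment?: string;                    // optional evidence statement
  supportingObservationIds: string[];  // observations surfaced by the rating UI
}

interface RatingCycleSubmission {
  cycleId: string;
  managerId: string;
  ratings: AttributeRating[];
  status: "draft" | "submitted";       // autosaved as a draft until you submit
}
```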
Rating scale conventions
The exact scale and labels are configurable by your org admin. Common conventions:
| Rating | Meaning |
|---|---|
| Below expectations | The employee is not yet performing at the level expected for their role. |
| Meets | The employee is performing as expected. The default for most attributes for most people. |
| Exceeds | The employee is consistently performing above expectations. |
| Significantly exceeds | The employee is operating at the next level on this attribute. |
Most ratings should be Meets — that is what the scale is calibrated to. If most of your ratings are "Exceeds," you are either calibrating differently from the rest of the org, or your team genuinely is exceptional. Either is worth a conversation with your manager.
Rating from evidence
The rating UI surfaces every observation you have authored that is tagged with the attribute being rated. Use those observations as your primary evidence (a rough heuristic is sketched after this list):
- No observations on this attribute? That's a coverage gap. Either pick Meets with a comment that you have not been observing for it, or skip the rating if your org allows. Don't rate from impression alone.
- Mostly strengths on this attribute? A higher rating is justified.
- Mostly opportunities? A lower rating is justified, but only if the opportunities are persistent. A single missed delivery does not justify Below expectations.
- Mixed strengths and opportunities? Most often, this is Meets. The mix is what most performance looks like.
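Read literally, these guidelines are a simple heuristic over the counts of strengths and opportunities. The sketch below is a hypothetical starting point, not the product's behavior, and the "mostly" threshold is an assumption; the judgment stays with you.

```ts
// Hypothetical heuristic mirroring the guidance above; a starting point only.
type EvidenceKind = "strength" | "opportunity";
type Suggestion = "Below expectations" | "Meets" | "Exceeds" | "no evidence";

function suggestRating(evidence: EvidenceKind[]): Suggestion {
  if (evidence.length === 0) return "no evidence"; // coverage gap: don't rate from impression

  const strengths = evidence.filter((e) => e === "strength").length;
  const opportunities = evidence.length - strengths;

  // "Mostly" here means at least twice the other kind, and more than a one-off.
  const mostly = (a: number, b: number) => a >= 2 && a >= 2 * b;

  if (mostly(strengths, opportunities)) return "Exceeds";
  if (mostly(opportunities, strengths)) return "Below expectations"; // only when persistent
  return "Meets"; // mixed evidence is what most performance looks like
}
```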
Calibration
Many organizations run a calibration session after ratings are submitted, where managers compare ratings to ensure consistency across teams. Calibration is run by your org admin or HR partner; you submit your ratings and join the calibration discussion when scheduled.
Attribute coverage in summaries
Both individual and team summaries surface attribute coverage as a section or panel.
In individual summaries
The summary shows:
- Attributes covered in observations during the period.
- Attribute split between strengths and opportunities.
- Attributes notably absent — competencies expected of the employee's role that have no observations in the period.
When drafting a summary (manually or with AI assistance), use this view to spot gaps before you finalize. An absent attribute is not necessarily a problem — but if the attribute is core to the role, the gap should be addressed in the recommended actions section.
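To picture how the coverage view is derived, the sketch below shows one plausible calculation over a period's observations. The shapes are assumptions, not the product's implementation.

```ts
// Hypothetical coverage calculation; shapes are assumptions, not the
// product's implementation.
interface TaggedObservation {
  attributeIds: string[];
  kind: "strength" | "opportunity";
}

interface CoverageRow {
  attributeId: string;
  strengths: number;
  opportunities: number;
}

function attributeCoverage(
  expectedAttributeIds: string[],     // attributes expected for the role and level
  observations: TaggedObservation[],  // one person's observations in the period
): { covered: CoverageRow[]; absent: string[] } {
  const rows = new Map<string, CoverageRow>();

  for (const obs of observations) {
    for (const id of obs.attributeIds) {
      const row = rows.get(id) ?? { attributeId: id, strengths: 0, opportunities: 0 };
      if (obs.kind === "strength") row.strengths += 1;
      else row.opportunities += 1;
      rows.set(id, row);
    }
  }

  return {
    covered: [...rows.values()],
    absent: expectedAttributeIds.filter((id) => !rows.has(id)), // notably absent
  };
}
```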
In team summaries
The team summary aggregates coverage across your team. You see:
- Most-tagged attributes across the team.
- Coverage gaps — attributes with thin coverage across the team.
- Per-employee splits, so you can see who has been observed against which attributes.
A team-wide gap is a coaching opportunity for you. If "Stakeholder management" is sparsely tagged across your team, start observing for it deliberately.
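The team view is the same calculation rolled up across people. A minimal sketch, with the threshold for "thin coverage" as a pure assumption:

```ts
// Hypothetical team roll-up; the "thin coverage" threshold is an assumption.
function teamCoverageGaps(
  libraryAttributeIds: string[],          // attributes relevant to the team
  tagsByEmployee: Map<string, string[]>,  // employeeId -> attribute tags on their observations
  minTagsPerAttribute = 2,
): string[] {
  const counts = new Map<string, number>();

  for (const tags of tagsByEmployee.values()) {
    for (const id of tags) counts.set(id, (counts.get(id) ?? 0) + 1);
  }

  // Attributes tagged fewer times than the threshold across the whole team.
  return libraryAttributeIds.filter((id) => (counts.get(id) ?? 0) < minTagsPerAttribute);
}
```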
Best practices
Rate based on evidence from observations
The strongest rating is the one you can defend from the observation history. If you have to reach for a single moment to justify a rating, the rating is probably aspirational. Wait for more evidence, or write the observations you wish you had been writing.
Revisit on a cycle
Attribute ratings are most useful when they are revisited regularly — at least once a year — and compared against the previous rating. The change in rating, and the evidence behind it, is the conversation. A rating done once and never revisited is just a snapshot.
Use the comment field
When you assign a rating, the optional comment is your evidence statement. "Meets — see Q2 observations on cross-team coordination" is more useful to your future self, your skip-level manager, and the employee than just "Meets."
Don't let ratings replace observations
If you only ever think about attributes during the rating cycle, the rating will be thin. Ongoing observation tagging is what makes the rating credible. The rating is the synthesis; observations are the evidence.
Talk through ratings with the employee
If your org shares ratings with employees (configured by your org admin), walk through the rating in a conversation. Don't just send the document. Ratings without context can feel arbitrary; ratings discussed alongside observations feel earned.
Privacy
- Attribute tags on observations are visible to the same people who can see the observation.
- Manager attribute ratings are visible to you, your skip-level manager, and (configurable per org) the employee being rated.
- Attribute coverage views in summaries are visible to the summary's audience.
Troubleshooting
An attribute I want to use is missing
The attribute library is maintained by your org admin. Send a request describing the competency, what good looks like, and where you would use it. Most admins are happy to add attributes that fill a real gap.
The rating cycle is not appearing
Rating cycles require the managerAttributes feature and an active cycle from your org admin. Confirm with your admin that a cycle is open, and check that you are signed in as the correct user.
My observations are not showing under the right attribute in ratings
The rating UI surfaces observations tagged with the attribute being rated, in the rating period. If observations are missing:
- Confirm the observations are tagged with the exact attribute (parent vs child attributes are tracked separately).
- Confirm the observation date falls within the rating period.
What to read next
- Creating observations — attribute tagging starts here.
- Writing performance summaries — attribute coverage shapes how you write summaries.
- Team summaries — attribute coverage rolls up across the team.