The Challenge
Several issues were compounding:
- Many employees wrote low-quality reflections that communicated feedback poorly
- There were few ways to learn what "good" looked like while writing
- Some managers introduced bias through inconsistent or unsupported language
- Writing reviews from scratch was time-consuming and cognitively heavy
- Clarity varied widely, even when underlying performance didn't
The result:
Performance conversations were shaped as much by writing ability and habits as by actual performance.
This created inconsistency, hidden bias, and unnecessary friction across the entire process.
The Solution
I designed the QCI Assistant — a structured, conversational system that helps employees and managers write stronger reflections before they submit them.
Instead of acting as a writing automation tool, the assistant:
- checks inputs against HubSpot's performance quality criteria
- identifies gaps in clarity, evidence, and alignment
- guides users to improve their own responses
The goal wasn't to generate better writing for people.
It was to help them produce better thinking, expressed more clearly.
Demo Walkthrough
The video below walks through the manager version of the QCI Assistant end-to-end — from onboarding through draft, revision, coaching check, and consistency review.
How It Works
The assistant is embedded alongside the existing QCI workflow and mirrors the same structure, so it feels familiar and easy to adopt.
1. Guided or fast-track input
Users can either:
- answer structured reflection questions step-by-step, or
- paste in a draft they've already written
This supports both first-time writers and experienced users.
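To make the dual entry point concrete, here is a minimal sketch of how both paths could normalize into one structure before review. Everything in it (the ReflectionDraft type, QCI_QUESTIONS, the example questions) is illustrative, not the production implementation.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the real QCI reflection questions
QCI_QUESTIONS = [
    "What were the most significant contributions this cycle?",
    "Where is there room for growth, and what evidence supports it?",
]

@dataclass
class ReflectionDraft:
    answers: dict   # question -> response, or {"draft": ...} for pasted text
    source: str     # "guided" or "fast_track"

def from_guided(responses: list) -> ReflectionDraft:
    # Step-by-step flow: one answer captured per structured question.
    return ReflectionDraft(dict(zip(QCI_QUESTIONS, responses)), "guided")

def from_pasted(draft: str) -> ReflectionDraft:
    # Fast track: the pre-written draft enters whole; the review step
    # is then responsible for checking coverage of each question.
    return ReflectionDraft({"draft": draft}, "fast_track")
```

Whichever path the user takes, the downstream quality checks see the same shape, which is what keeps the two modes equivalent.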
2. Review against quality criteria
The system evaluates responses against defined standards (see the sketch after this list), such as:
- clarity and specificity
- use of evidence
- alignment with performance expectations
- completeness across questions
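As a rough illustration of what "evaluates against defined standards" can mean mechanically, here is a hedged sketch of a declarative rubric. The real criteria are HubSpot's own QCI standards; the two heuristic checks below are simplified stand-ins.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Criterion:
    name: str
    # Returns a finding to surface, or None if the text passes
    check: Callable[[str], Optional[str]]

def lacks_evidence(text: str) -> Optional[str]:
    # Crude proxy: look for markers of concrete examples or outcomes
    markers = ("for example", "e.g.", "specifically", "measured", "shipped")
    if not any(m in text.lower() for m in markers):
        return "No concrete example or evidence cited."
    return None

def too_vague(text: str) -> Optional[str]:
    vague = ("great job", "did well", "needs improvement", "good work")
    hits = [v for v in vague if v in text.lower()]
    return f"Vague phrasing without specifics: {hits}" if hits else None

RUBRIC = [
    Criterion("use of evidence", lacks_evidence),
    Criterion("clarity and specificity", too_vague),
]

def review(answer: str) -> list:
    """Run every criterion and collect (criterion, finding) pairs."""
    findings = []
    for c in RUBRIC:
        finding = c.check(answer)
        if finding is not None:
            findings.append((c.name, finding))
    return findings
```

Keeping the rubric declarative means the standards can be updated in one place without touching the coaching logic that consumes the findings.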
3. Targeted coaching
Instead of rewriting the response, the assistant:
- highlights gaps or inconsistencies
- provides a small number of focused suggestions
- asks follow-up questions to help the user improve their draft
Users revise the content themselves, reinforcing the standard over time.
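One way to enforce "coach, don't rewrite" is to make the output contract itself exclude rewritten text. The sketch below is an assumption about shape, not the real interface: findings go in, a capped set of highlights and follow-up questions come out, and no rewrite is ever returned.

```python
MAX_SUGGESTIONS = 3  # illustrative cap to keep coaching focused

def coach(findings: list) -> dict:
    # Surface what to fix and why, limited to a small number of items
    highlights = [
        f"{criterion}: {finding}" for criterion, finding in findings
    ][:MAX_SUGGESTIONS]
    # Turn each gap into a question the user answers in their own words
    follow_ups = [
        f"Can you add a specific example that addresses '{criterion}'?"
        for criterion, _ in findings[:MAX_SUGGESTIONS]
    ]
    return {
        "highlights": highlights,  # gaps and inconsistencies to address
        "questions": follow_ups,   # prompts for the user's own revision
        "rewrite": None,           # deliberately absent by design
    }
```

The cap is a design choice as much as a technical one: a handful of focused suggestions is actionable, while an exhaustive critique would overwhelm the user and push them toward wanting an auto-rewrite.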
4. Optional support for deeper reflection
Additional features extend the experience without making it heavier:
- coaching prompts to prepare for the performance conversation
- consistency checks across multiple employees (for managers)
- lightweight templates for ongoing performance notes
These are available when needed, but not required.
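The manager-facing consistency check could work roughly like this sketch: group employees by rating, score each review on a crude specificity proxy, and flag large gaps. The 0.4 threshold and the scoring proxy are purely illustrative assumptions.

```python
def specificity(text: str) -> float:
    # Crude proxy: share of sentences containing a number or named example
    sentences = [s for s in text.split(".") if s.strip()]
    concrete = [
        s for s in sentences
        if any(ch.isdigit() for ch in s) or "for example" in s.lower()
    ]
    return len(concrete) / max(len(sentences), 1)

def consistency_flags(reviews: dict, ratings: dict) -> list:
    """Flag uneven evidence among employees who share the same rating."""
    by_rating = {}
    for name, rating in ratings.items():
        by_rating.setdefault(rating, []).append(name)
    flags = []
    for rating, names in by_rating.items():
        scores = {n: specificity(reviews[n]) for n in names}
        if len(scores) > 1 and max(scores.values()) - min(scores.values()) > 0.4:
            flags.append(
                f"Rating '{rating}': uneven evidence across {sorted(scores)}"
            )
    return flags
```

The point is not that a heuristic like this is accurate on its own, but that even a rough signal gives managers a concrete place to look before the review is submitted.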
Key Design Decisions
This system sits close to real career outcomes, so the design had to prioritize clarity, fairness, and trust.
1. Teach standards through use
Most review guidance lives in documentation that people don't revisit.
Here, the standards are embedded directly into the experience. Users learn what "good" looks like by applying it in context, not by reading about it.
2. Don't let AI override the user
It would be easy for the system to rewrite vague or biased input into something polished.
That would improve outputs, but hide the underlying issue.
Instead, the assistant surfaces weak or unclear thinking and asks the user to strengthen it. This keeps ownership with the user and builds skill over time.
3. Mirror the real workflow
The assistant follows the same structure as the QCI form and aligns to existing processes (including edge cases like underperformance).
This reduces friction and ensures outputs are directly usable without translation.
4. Use structure to reduce bias
Features like cross-employee consistency checks help managers catch unintended differences in tone or evaluation across similar performance.
Rather than trying to "solve bias," the system makes it more visible and correctable.
Tradeoffs
- Clarity over speed: a guided flow adds friction, but reduces ambiguity in a high-stakes context
- Learning over automation: avoiding auto-rewrites preserves skill development and fairness
- Consistency over flexibility: structured inputs improve comparability across employees
- Support over authority: the system guides decisions without making them
Impact
- ~2,600 users in the first month
- Over 50% of QCI participants engaged
- Became a top-used internal AI tool during rollout
More importantly:
Partnering with People Analytics, we validated that the system raised the quality of weaker reflections without degrading strong ones. That outcome matters more than adoption.
Why This Compounds
The assistant is designed for repeated use across performance cycles.
Each time, users get feedback on:
- clarity of communication
- strength of evidence
- alignment with expectations
Over time, the goal is not reliance on the tool — it's internalization of the standard.
Better inputs lead to better conversations. Better conversations lead to better decisions.
What This Shows About My Work
This project reflects how I approach AI systems:
- Identifying gaps from real behavior and turning them into scalable solutions
- Embedding support directly into moments that matter
- Designing for both individual improvement and organizational consistency
- Building systems that strengthen judgment, not replace it
The QCI Assistant started as a simple intervention in one workflow.
It became a foundation for improving how performance is understood and communicated across the organization.