
The Feedback Was Right. The Framing Was Wrong.


Alex Wilson

SVP of Product



I manage 10 direct reports. We do quarterly feedback, bidirectional, which means I start by asking them what they’d like me to continue, start, stop, or do differently. Then we flip it.

I’ve run this cadence for a while. Before my last round, I was better prepared than usual. I’d been syncing Granola meeting transcripts and 1:1 notes into Claude, so I could pull themes across months of conversations, not just whatever I happened to remember from the past two weeks. I had the patterns. I knew what I needed to say to each person.

I had already said most of it before.

One in Three Feedback Conversations Makes Performance Worse, Not Better

That’s not a rhetorical point. A landmark meta-analysis by Kluger and DeNisi analyzed 607 effect sizes from decades of feedback research and found that more than one in three feedback interventions actually decreased performance after they were delivered. Not neutral. Worse. Their explanation: feedback becomes less effective, and sometimes actively counterproductive, the closer it gets to the person’s sense of self. When feedback touches something someone considers core to who they are, the brain stops processing it as information and starts processing it as threat.

When that happens, people don’t change. They cope. They dispute the feedback, reinterpret it favorably, lower their goals, or agree in the moment and move on. The feedback is accurate. It doesn’t matter.

I had been watching this play out with one of my direct reports.


The Same Feedback Didn’t Land Until I Changed How I Framed It

One of my direct reports is genuinely one of the most helpful people I work with. When someone asks if something is possible, they’ll say yes, enthusiastically, warmly, and then go on to explain everything they’re going to do and how. It comes from a real place.

But in a startup where context switches fast, that pattern creates noise. Someone asks a quick question and gets a five-minute answer. The feedback I needed to give was simple: just say yes and move on. Not every question needs a full response.

I’d said something like this before. They understood it, nodded, and seemed to take it in. It came up again anyway.

This time, I prepared differently.

I was using Cloverleaf’s MCP integration alongside my meeting notes, pulling together patterns from past 1:1s and layering in behavioral data from assessments into the same context. Not just what had been happening, but additional signals about how this person tends to operate and how feedback like this might land with them.

The output didn’t just give me talking points. It added guidance on how to frame the feedback for this specific person.

It surfaced the same theme, and then added more helpful nuance and insight:

This is the single most personality-driven behavior. This person is very people-centered in nature, and helpfulness feels like an identity to them — not just a habit. Be careful here. If they hear ‘stop being helpful,’ that will land as a rejection of who they are. Instead, frame it as how they channel their helpfulness.


When Feedback Touches Identity, It Stops Being Processed as Information

I stopped when I read that. Because I realized what I had been doing, even without using those exact words, was telling someone to stop doing the thing that feels most like them. For someone whose helpfulness is core to their identity, that isn’t a coaching note. It’s an identity threat.

The research on this is clear. Studies on how people respond to identity-threatening feedback consistently show the same pattern: people cope rather than change. They dispute it, misremember it more favorably, or reduce their commitment to improving, none of which is visible in the moment. They nod, they move on, and nothing shifts. The feedback wasn’t wrong. The frame was.

The reframe the system suggested: “Your helpfulness is one of your superpowers. The change is about being strategically helpful, directing it where it can have the most impact, not diffusing it across every moment.”

Same observation. Completely different frame. Their response when I used it: “Yeah, that’s spot on.” And then the conversation actually opened up: they had thoughts about specific situations, ideas for what strategically helpful would look like day-to-day. It became a real exchange instead of something they were getting through.

What Behavioral Data Does That Performance Data Can’t

Most of what gets written about AI and feedback is focused on improving the data collection side: surfacing patterns across performance reviews, reducing recency bias, generating first drafts of assessments. That’s genuinely useful. Gallup research shows that employees who receive frequent, specific feedback are nearly four times as likely to be engaged, and better preparation helps managers get there.

But performance data tells you what happened. It doesn’t tell you how to talk about it in a way this specific person can actually receive.

That’s a different problem. The information about the helpfulness pattern was solid. What I was missing was context on how that pattern connects to this person’s identity, and therefore how I needed to frame the conversation for them to actually hear it.

That’s what the assessment data surfaced. Not a profile to study before a review cycle, but a specific note in the preparation flow: here’s how this person will likely receive what you’re about to say. Before the conversation, not after.

Giving Effective Feedback Gets Harder the More People You Manage

I know my team. I spend real time with each person. But managing 10 people at a startup — across product, customers, recruiting, and everything else — means the nuanced detail of how each individual thinks doesn’t stay in active memory. Some of it slips. Some of it I never had clearly to begin with.

This isn’t unique to me. Research on continuous feedback finds that feedback quality, specifically how well it accounts for the individual, is one of the strongest predictors of whether it changes behavior. The bottleneck isn’t manager effort or intent. It’s the cognitive load of holding detailed individual context across many people simultaneously.

Cloverleaf’s insight doesn’t replace knowing your team. What it does is resurface the context that matters at the moment you need it, in a way that changes not just what you say but how you say it for this person.

One Data Point Can Entirely Change How People Give & Receive Feedback

The feedback I’d been trying to give for months finally landed. Not because I said something new, but because I said it in a way this person could actually hear.

That’s the part that’s been missing from most of what I’ve seen in this space. Not better data collection or more frequent check-ins. The translation layer between what you know about someone’s performance and how to communicate it in a way that reaches them, that fits how they think, what they value, and what they’re most likely to act on.

When that translation layer between performance data and behavioral context is present, feedback stops being something people sit through and becomes something they understand, engage with, and actually change because of.


Alex Wilson

Alex is the SVP of Product at Cloverleaf, where he leads the development of Cloverleaf's AI Coaching, a role that blends his passion for building great software, great teams, and innovative products. Alex has a Master's in Computer Science from Northwestern University and lives in Vermont.