Most organizations will tell you coaching matters. In a recent study of 177 HR professionals by the HR Research Institute, 71% of respondents at organizations with leadership coaching said it’s a strategic priority. Leadership development has held the number-one spot on Gartner’s HR priorities list for three consecutive years. Budget is flowing. Executive buy-in exists. The intent is there.
And yet, when those same organizations were asked whether coaching has actually improved performance to a high degree, only 22% said yes. Roughly one in five couldn’t say whether it had any impact at all.
This isn’t a new category of problem. McKinsey has documented why leadership development programs fail for over a decade. MIT Sloan estimates that only about 10% of leadership development spending actually delivers results. But the HR.com data adds something those analyses don’t — it shows exactly where the system breaks down for coaching specifically, and what the organizations getting results actually do differently.
Get the 2026 AI coaching playbook to see how organizations are implementing AI coaching at scale.
Only 30% of organizations train leaders to coach, track whether it’s happening, or tie it to performance reviews
The research reveals that most organizations have the aspiration but not the scaffolding. Only 30% train leaders in how to actually coach. Only 35% link coaching to leadership performance reviews. Only 23% monitor and evaluate participation. And just 18% reward or recognize leaders for developing others.
Think about what that means in practice. An organization declares coaching a strategic priority, then asks leaders to coach without teaching them how, doesn’t connect coaching to how those leaders are evaluated, doesn’t track whether it’s happening, and doesn’t recognize the leaders who do it well. The coaching initiative becomes something leaders are expected to do on top of everything else — with no structure, no measurement, and no incentive.
When 58% of respondents say the biggest barrier to coaching is “not enough time,” that’s not a scheduling problem. It’s a prioritization signal. Leaders will make time for what the organization actually measures and rewards. When coaching isn’t connected to anything that counts, it gets crowded out — no matter how many executives say it matters in the all-hands meeting.
This is a system failure, not a motivation failure. And it explains why buying a coaching platform — no matter how good — won’t fix things if the infrastructure around it doesn’t exist. The tool can’t compensate for what the organization hasn’t built.
Four practices that separate the 22% seeing coaching results from everyone else
The HR.com research divided organizations into two groups — those reporting strong coaching results (higher performers) and those reporting weaker results (lower performers) — and compared their practices. The differences are stark, and they’re not about budget or headcount.
1. They train leaders to coach, deliberately and continuously.
Higher-performing organizations are over three times more likely to say their leaders are well-trained in coaching (49% vs. 15%). Most organizations assume leaders know how to coach because they’re experienced managers. The data says otherwise — fewer than half of leaders are rated proficient in listening, instilling confidence, or practicing empathy, the very skills coaching requires. Helping managers give feedback that actually lands is one of the highest-leverage investments an organization can make, and most aren’t making it.
2. They measure outcomes, not just activity.
Higher performers measure leadership performance improvement at more than double the rate of lower performers (51% vs. 24%). They track career advancement trajectories (41% vs. 17%) and learning assessments (31% vs. 11%). Lower performers, meanwhile, are nearly three times more likely to say they don’t measure coaching at all (33% vs. 13%). The gap here isn’t sophistication — it’s whether anyone is asking “is this working?” in the first place. Measuring the real ROI of coaching requires tracking behavior change, not just participation.
3. They build coaching into how the organization already operates.
Higher performers are more than twice as likely to connect coaching to succession planning (39% vs. 17%) and to link it to performance reviews (46% vs. 28%). Coaching isn’t a standalone initiative — it’s woven into the systems leaders already interact with. This is the difference between coaching as a side project and coaching as organizational infrastructure.
4. They use technology because it scales.
Higher-performing organizations are nearly twice as likely to use digital tools for coaching and over three times more likely to use in-session support tools (51% vs. 16%). Lower performers are three times more likely to use no technology at all (43% vs. 14%). When coaching depends entirely on one person making time in a packed calendar to have a conversation they haven’t been trained for, it doesn’t scale. Technology doesn’t replace the human element — it creates the infrastructure that makes coaching possible at scale, grounded in data about how people actually work together.
See How Cloverleaf’s AI Coach Works
The most popular way to measure coaching doesn’t predict whether it actually works
Across all organizations in the study, the most common way to measure coaching effectiveness is participant feedback (42%). That’s asking the person being coached whether they liked it — which research consistently shows has no significant relationship to whether they actually learned or changed behavior. Only 37% track leadership behavior change. Only 27% track leadership pipeline readiness. And a quarter don’t measure at all.
This creates a vicious cycle. Without meaningful measurement, coaching can’t prove its value. Without proving its value, coaching doesn’t get the organizational commitment it needs — the dedicated time, the performance review integration, the leader training. And without that commitment, coaching produces exactly the mediocre results that make it hard to justify. Twenty-two percent satisfaction on a 71% strategic priority isn’t a coaching problem. It’s a measurement and accountability problem.
Organizations seeing results with AI coaching are three times more likely to have built the systems around it first
Only 16% of organizations in the study use AI-driven development for coaching. Thirty-two percent use no technology at all. But the higher-performer data tells a different story: organizations seeing results are three times more likely to use AI to predict future development needs (46% vs. 15%) and nearly twice as likely to personalize development with AI (49% vs. 27%).
This doesn’t mean AI is the answer to the priority paradox. An AI coaching platform deployed into an organization that doesn’t train leaders, doesn’t measure outcomes, and doesn’t connect coaching to performance reviews will underperform just like everything else. But for organizations that have built the infrastructure — that train leaders, measure behavior, and treat coaching as a system — AI becomes the mechanism that makes it possible to reach every manager, not just the ones who happen to get paired with a good coach. It’s what moves coaching from a new manager’s first 90 days to their next 900.
See how your coaching program compares across nine research-backed benchmarks
This article draws on the headline findings from the 2026 Leadership Coaching and Mentoring Playbook, published by the HR Research Institute and sponsored by Cloverleaf. The full report includes detailed comparisons between higher- and lower-performing organizations across nine major findings, technology adoption data, competency breakdowns, and actionable takeaways for building the coaching infrastructure that actually produces results.