
Why L&D Leaders Cannot Accept Mediocre AI to Develop Their People


Matt Lievertz

VP of Engineering at Cloverleaf


Artificial intelligence has lowered the cost of producing learning content to nearly zero. But while AI has made content easy to create, it has also created a much bigger risk for organizations: the illusion of progress without actual learning or real behavior change.

This problem is accelerating. The LinkedIn Workplace Learning Report 2024 shows that 77% of L&D professionals expect AI to dramatically shape content development. Yet in striking contrast, the McKinsey 2025 AI in the Workplace report finds that only 1% of C-suite leaders believe their AI rollouts are mature.

That gap represents billions spent on AI tools that look innovative but fail to deliver what matters: performance improvement.

The core issue? Most AI in learning is built to produce more content faster, not to help people apply what they learn or behave differently in real work. And when organizations deploy generic AI tools that produce generic learning, the outcome is predictable:

  • low adoption
  • low trust
  • low impact
  • high frustration

The stakes are not theoretical. Research from the Center for Engaged Learning shows how AI hallucinations can result in “hazardous outcomes” in educational settings. Even outside corporate learning, researchers are raising the alarm. Boston University’s EVAL Collaborative found that fewer than 10% of AI learning tools—across the entire education sector—have undergone independent validation. The problem is systemic: AI is being adopted faster than it is being proven effective.

If organizations accept low-quality AI, they accept low-quality learning—and ultimately, low-quality performance.

This article outlines a clearer path: leaders must demand AI learning that is personalized, contextual, interactive, and grounded in behavioral science. And they must stop settling for AI that only scales content when what they need is AI that actually scales capability.


The Current AI Landscape: A Flood of Tools, A Drought of Impact

Why every learning vendor suddenly claims “AI-powered”

AI’s accessibility has led to an explosion of vendors offering automated learning solutions. The problem isn’t that these tools exist—it’s that leaders often struggle to distinguish between AI that looks impressive and AI that drives measurable change.

Most AI learning tools fall into five common categories:

1. Content Generators

They rapidly produce courses, scripts, or microlearning modules. Useful for speed—but often shallow.

  • Generic “starter” content
  • Often requires human rewriting
  • Lacks learner- or team-specific context

No surprise: companies report up to 60% of AI-generated learning content still requires substantial revision.

2. Recommendation Engines

These tools suggest courses based on role, skill tags, or past activity. On the surface, this feels personalized. In reality, it rarely is.

Research on personalized and adaptive learning shows that effective personalization requires cognitive, behavioral, and contextual adaptation—not merely matching people to generic content.

3. Auto-Curation Systems

They pull content from libraries or the open web. This increases volume—not relevance. Without quality controls, curation leads to:

  • bloated libraries
  • inconsistent quality
  • decision fatigue

4. AI Quiz Builders & Assessments

These generate questions or quick checks for understanding. The issue? They often fail to align with real work demands. The ETS Responsible AI Framework underscores how most AI assessments fall short of required validity standards.

5. Chat Tutors / On-Demand Assistants

These tools answer learner questions or summarize concepts. But as Faculty Focus research highlights, AI hallucinations and generic responses still undermine trust.


Why Most AI Learning Fails: Content ≠ Capability

A pivotal finding from the World Journal of Advanced Research and Reviews makes this clear:

Most AI in learning optimizes for content production—not behavior change.

The result is a widening “quality divide”:

Content-Focused AI

  • Speeds up creation
  • Produces learning assets
  • Measures completions
  • Encourages passive consumption
  • Results: low retention, low adoption, low impact

Research shows learners retain only 20% of information from passive formats.

Behavior-Focused AI

  • Helps people apply new skills
  • Connects learning to real work
  • Reinforces habits over time
  • Measures behavioral outcomes
  • Results: improved performance, stronger relationships, better teams

The difference is dramatic. PNAS research demonstrates that AI can directly shape behavior—but only when it engages with people meaningfully.

The Three Non-Negotiables of Effective AI Learning

Leaders who want more than check-the-box training must insist on AI that meets three criteria:

1. Personalization: Grounded in Behavioral Science, Not Job Titles

Most “personalized” AI learning is anything but. True personalization requires understanding how individual people think, communicate, and make decisions.

Validated behavioral assessments such as DISC, Enneagram, or 16 Types reveal cognitive patterns and work-style tendencies that generic AI cannot infer.

A 2025 study published on ScienceDirect shows AI personalization yields significant performance gains (effect size 0.924) when it adjusts for cognitive abilities and prior knowledge.

Effective personalization must:

  • reflect real behavioral data
  • explain why a recommendation matters
  • adapt as a person grows
  • support team-specific dynamics

Ineffective personalization:

  • “Because you’re a manager…”
  • “Because you viewed 3 videos on feedback…”
  • Same content for everyone in a job family

When AI understands behavior—not just role—personalization becomes transformative.

2. Context: The Missing Ingredient in Almost All AI Learning

The number one reason learning doesn’t transfer?

It happens out of context.

The Learning Guild notes that learning fails when it’s separated from the moments where it’s applied. A 2025 systematic review reinforces that workplace e-learning rarely succeeds without contextual alignment.

Contextual AI considers:

  • the meeting you’re heading into
  • the personalities in the room
  • your team’s communication patterns
  • current priorities and tensions
  • the timing of performance cycles

This is what makes learning usable—not theoretical.

Context examples:

  • Before a 1:1: “This teammate values structure; clarify expectations early.”
  • Ahead of a presentation: “Your audience prefers details; lead with data, not story.”
  • During team conflict: “Your communication style may feel intense to high-S colleagues; slow your pace.”

This is what mediocre AI learning and development tools and chat coaches cannot do: they don’t know or understand the context.
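
To make this concrete, here is a deliberately simplified, hypothetical sketch (in Python) of how a contextual nudge could be assembled from a behavioral profile and an upcoming calendar event. The data model, style codes, and tips are assumptions made for illustration only, not Cloverleaf’s implementation or API.

```python
# Hypothetical illustration only: combining a simple behavioral profile with an
# upcoming meeting to produce a timely, person-specific nudge. All names, fields,
# and tips here are assumptions made for this sketch, not any vendor's product.

from dataclasses import dataclass


@dataclass
class Teammate:
    name: str
    disc_style: str  # e.g., "S" (steadiness), "D" (dominance)


@dataclass
class Meeting:
    title: str
    attendees: list[Teammate]


# Assumed mapping from work style to a short coaching tip.
TIPS_BY_STYLE = {
    "S": "values structure; clarify expectations early and slow your pace",
    "D": "prefers directness; lead with the decision you need",
}


def contextual_nudges(meeting: Meeting) -> list[str]:
    """Return one short tip per attendee, framed for the specific meeting."""
    return [
        f"Before '{meeting.title}': {person.name} "
        f"{TIPS_BY_STYLE.get(person.disc_style, 'has no profile on file')}."
        for person in meeting.attendees
    ]


if __name__ == "__main__":
    one_on_one = Meeting("Weekly 1:1", [Teammate("Jordan", "S")])
    print("\n".join(contextual_nudges(one_on_one)))
```

The point of the sketch is the shape of the logic: the advice is generated for a named person, in a named moment, from behavioral data, rather than pulled from a generic content library.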

3. Interactivity: What Actually Drives Behavior Change

A mountain of research—including active learning analysis and Transfr efficacy studies—shows that learning only sticks when people interact with it.

Passive AI = quick forgetting

Interactive AI = habit building

Reactive chatbots succeed only 15–25% of the time.

Proactive coaching systems succeed 75%+ of the time.

Because interaction drives:

  • reflection
  • intention
  • timing
  • reinforcement

And those four elements drive behavior change.

The Costly Sacrifice of Mediocre AI

Organizations assume mediocre AI is “good enough.” It isn’t. It’s expensive.

1. The Mediocrity Tax

  • wasted licenses
  • low adoption
  • inconsistent quality
  • rework and rewriting
  • user skepticism
  • stalled digital transformation

HBR’s Stop Tinkering with AI warns that small, tentative AI deployments “never reach the step that adds economic value.”

2. The Trust Erosion Problem

Once people encounter hallucinations or generic advice, they stop engaging. Research from ResearchGate shows trust recovery takes up to two years.

3. The Competitive Gap

Organizations using high-quality AI learning systems report:

  • 30–50% faster skill acquisition
  • 20–40% better team collaboration
  • higher retention

Mediocre AI leads nowhere. Quality AI compounds results.

What Quality AI Learning Looks Like (And Why Cloverleaf Meets the Standard)

Most AI learning tools cannot meet the three standards above for a simple reason: they lack foundational data about how people behave and work together.

Cloverleaf takes a fundamentally different approach.

1. Assessment-Backed Personalization (the science foundation)

Cloverleaf’s AI Coach is built on validated assessments, giving it behavioral insight that generic AI cannot mimic.

This enables:

  • tailored guidance for each personality
  • team-specific coaching
  • insights that explain why an approach works
  • adaptive updates as behavior changes

2. Contextual Intelligence Across the Workday

Cloverleaf connects with:

  • calendar systems
  • HRIS data
  • communication platforms (Slack, Teams, email)
  • team structures

It delivers coaching:

  • at the moment of real work
  • for the specific people involved
  • based on real team dynamics
  • in normal workflows

3. Proactive, Not Reactive Engagement

Cloverleaf does not wait for users to ask questions.

Instead, it can:

  • anticipate coaching needs
  • deliver micro-insights before meetings
  • reinforce strengths over time
  • adapt based on user response patterns

This is what drives sustained adoption (75%+) and measurable results:

  • 86% improvement in team effectiveness
  • 33% improvement in teamwork
  • 31% better communication

The problem with mediocre AI is that it produces content—endlessly, cheaply, and often generically. Cloverleaf does something different: it builds capability by coaching people in the moments where their behavior, decisions, and relationships actually change.

How Leaders Can Evaluate Their AI Learning Investments

A simple, fast audit using the “Quality Standards Matrix” can reveal whether your current AI tools will create capability—or waste.

1. Personalization

Does the AI understand behavior, not just role?

2. Context

Does it integrate with real work and real teams?

3. Interactivity

Does it drive reflection, timing, reinforcement?

4. Proactivity

Does it anticipate needs instead of waiting for prompts?

5. Measurement

Does it show measurable improvement in how people communicate, collaborate, and make decisions? If it can’t, it’s not building capability. It’s simply generating content.
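
Treated as a checklist, these five questions can be scored in minutes. Below is a hypothetical sketch (in Python) of such an audit; the criteria mirror the matrix above, but the 0–2 scale and the pass threshold are assumptions for illustration, not a validated rubric.

```python
# Hypothetical illustration only: scoring an AI learning tool against the five
# quality standards above. The 0-2 scale and pass threshold are assumptions
# made for this sketch, not a validated instrument.

CRITERIA = {
    "personalization": "Does the AI understand behavior, not just role?",
    "context": "Does it integrate with real work and real teams?",
    "interactivity": "Does it drive reflection, timing, and reinforcement?",
    "proactivity": "Does it anticipate needs instead of waiting for prompts?",
    "measurement": "Does it show measurable change in how people work together?",
}


def audit(tool_name: str, scores: dict) -> str:
    """Score each criterion 0 (absent), 1 (partial), or 2 (strong), then summarize."""
    total = sum(scores.get(criterion, 0) for criterion in CRITERIA)
    max_total = 2 * len(CRITERIA)
    verdict = "building capability" if total >= 8 else "mostly generating content"
    return f"{tool_name}: {total}/{max_total} -> {verdict}"


if __name__ == "__main__":
    print(audit("Generic content generator",
                {"personalization": 0, "context": 0, "interactivity": 1,
                 "proactivity": 0, "measurement": 0}))
```

Run against a handful of shortlisted tools, even a rough score like this makes the content-versus-capability gap visible before a contract is signed.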

The Choice Ahead: Mediocrity or Meaningful Change

AI is shaping the next decade of workplace learning, but whether it accelerates performance or amplifies mediocrity depends entirely on the standards leaders demand.

Mediocre AI makes learning cheaper.

Quality AI makes teams better.

The difference is enormous.

Leaders have a rare opportunity to build and implement tools that truly transform how people work, collaborate, and grow. But only if they refuse to settle for AI mediocrity and choose to invest in solutions that meet the science-backed standards of personalization, context, and interactivity.


Matt Lievertz

Matt Lievertz is the Vice President of Engineering at Cloverleaf, where he leads product and platform strategy, engineering operations, and AI innovation. With experience spanning startups, enterprise, and government, Matt is passionate about building high-performing teams and solving the right problems—especially when it drives meaningful impact for people and organizations. He believes great software starts with great communication and thrives at the intersection of thoughtful strategy and hands-on execution.