Customer support onboarding at Wix used to take three months. Agents had to memorize complex product knowledge across multiple offerings—websites, e-commerce, bookings, blogs. By month three, they’d forgotten what they learned in month one. And the product changed so fast that memorized knowledge became outdated anyway.
Dr. Eli Bendet-Taicher, Head of Global Learning & Talent Development at Wix, and his team built an AI-supported knowledge discovery agent. Agents can now find relevant internal resources in seconds, get simplified explanations of complex procedures, and receive suggested responses that adapt to customer context.
Onboarding dropped from three months to one month. The training that remained focused on judgment, communication, and customer empathy—the human skills AI can’t replace. Technical knowledge became instantly accessible instead of memorized.
But not every AI implementation at Wix succeeded. Eli’s team also built personalized learning paths that employees didn’t use. Sarika Lamont, Chief People Officer at Vidyard, discovered AI tools that promised productivity created multitasking overload instead. Christina Parr, Global Talent & Organizational Design Leader, watched team members lose credibility by dumping uncustomized AI outputs into Slack.
What actually works when implementing AI in talent development—and what fails in ways that waste time, budget, and credibility? Three practitioners recently shared their real implementation stories: the wins, the disappointments, and what they’d do differently.
Why personalized learning paths failed (and what employees actually needed)
Eli’s team built personalized learning paths on their LMS. The AI would analyze each employee’s role and history, then suggest what they should learn next. On paper, it sounded perfect.
Employees didn’t use it.
“People weren’t struggling with what to learn,” Eli said. “They were struggling with clarity, with time, with priority, with support and coaching.”
The AI solved a problem that didn’t exist. Employees don’t suddenly become engaged because a system suggests smarter course sequences. They become more effective when AI helps them solve problems faster in their actual workflow.
AI can’t redesign motivation. AI removes friction.
When AI creates more problems than it solves
Sarika Lamont, Chief People Officer at Vidyard, identified another failure pattern: AI tools that promise productivity but create multitasking overload.
“There’s a great HBR article that just came out on how AI doesn’t reduce work—it intensifies it,” Sarika said. “We’re multitasking even more. We’re code-switching and context-switching even more. I’ve got six different tasks going on in multiple different tools and my brain is fried.”
Excitement about what AI can do leads people to try everything at once. One of Vidyard’s software engineers joked that he’s working on getting better at ADHD: he’s so good at focusing that he needs to improve his ability to switch between tasks constantly.
People need to slow down to speed up. But organizations don’t know how to create that space.
The credibility problem with generative AI
Christina Parr, Global Talent & Organizational Design Leader, shared what happens when teams rely on generative AI without customization.
“We had a new team member who would see the team talking on Slack or Teams about needing a tool for something. This person would go out to AI, ask for exactly what we were asking for, and just dump the document into Slack,” Christina said. “It was not effective. It took away from that person’s credibility because it wasn’t at all customized to what the team actually needed.”
Generative AI can draft quickly. Human judgment determines whether the output is actually useful.
For more on how talent development infrastructure changes when AI handles friction, see why 2026 is the year talent development becomes business infrastructure.
What to measure instead of logins: Time saved, promotions, and manager effectiveness
Login metrics don’t prove AI delivers value. When development happens in the flow of work instead of behind a login wall, you need different measurements.
Eli outlined three categories for measuring AI impact:
Operational efficiency: The straightforward calculation
Time saved creating materials. Faster access to knowledge. Shorter onboarding cycles.
“For us, reducing onboarding from three months to one month—that’s ROI right there,” Eli said. “Two months of training cut off. You know how much money that costs.”
Calculate reduced hours multiplied by cost per hour. This is the easiest layer to measure and defend.
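As a rough sketch of that calculation (the headcount, hours, and cost figures below are hypothetical placeholders, not Wix’s actual numbers):

```python
# Rough ROI sketch for shortened onboarding: training hours eliminated,
# multiplied by the fully loaded cost per hour.
# All inputs are hypothetical placeholders -- substitute your own figures.

def onboarding_savings(agents: int, weeks_saved: float,
                       hours_per_week: float, cost_per_hour: float) -> float:
    """Return the estimated cost of training time eliminated."""
    return agents * weeks_saved * hours_per_week * cost_per_hour

# Example: 50 new agents per year, onboarding cut from ~12 to ~4 weeks,
# 40 training hours per week, $30/hour fully loaded cost.
savings = onboarding_savings(agents=50, weeks_saved=8,
                             hours_per_week=40, cost_per_hour=30)
print(f"${savings:,.0f}")  # $480,000
```

The point isn’t precision; it’s that this layer reduces to simple arithmetic you can defend in a budget conversation.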
Performance outcomes: Business KPIs that improve
Time to productivity. Error rates. Customer satisfaction. Manager effectiveness.
When AI embeds in workflows, business KPIs attached to the behaviors you’re trying to change should improve. Wix tracks customer support metrics—response time, resolution time, customer satisfaction scores. Those metrics improved when agents could find answers instantly instead of escalating or guessing.
Vidyard explores rep productivity and quota achievement. Can individual reps handle higher quotas when AI improves their workflow? That’s a performance outcome with direct revenue impact.
Decision quality: The hardest to measure, most important to track
Can managers give better feedback faster? Can they identify skill gaps earlier? Do internal mobility and promotion rates improve?
“This is the trickiest pillar but actually one of the most important ones that’s being overlooked,” Eli said. “AI tools that help us make better business decisions on the fly—that’s what we should be measuring.”
Organizations using AI coaching see measurable differences here. Employees who engage with AI coaching get promoted at 3x the rate of those who don’t. Managers give feedback more frequently and more effectively when they receive prompts before one-on-ones.
Internal mobility increases. Retention improves. Team performance strengthens over time.
Function-specific metrics matter more than organization-wide KPIs
Sarika emphasized that KPIs need to be function-specific, not one-size-fits-all.
“What ROI looks like for sales is going to be different for engineering,” she said. “I’m putting the onus back on those leaders—you tell me what problem you’re trying to solve, and then we tie measurable outcomes specifically to that.”
Sales might measure rep productivity and quota achievement. Engineering might measure developer experience survey scores and engineering output. HR might measure time to fill positions and quality of hire.
The mistake is trying to create one organization-wide AI success metric. The win is helping each function measure AI’s impact on their specific business outcomes.
For more on how AI coaching enables measurement beyond activity metrics, see how AI coaching works.
How to navigate security requirements without getting stuck
Security concerns stop more AI implementations than any technical limitation.
“Our security team is nervous about me putting personally identifiable data in AI tools like Claude and OpenAI,” one talent leader asked during the webinar. “It’s really limiting our ability to move forward. What are tactical tips?”
Start with your CIO as your partner, not your barrier
“Your CIO should be your bestie,” Christina said. “Start with a committee that includes your CTO, CIO, chief information security officer, your risk person, the head of HR. You may even want someone who heads procurement.”
This isn’t about getting permission. This is about building a policy together that addresses real risks while enabling real work.
Develop a people-specific AI policy, not just a generic AI policy
Generic AI policies cover broad usage. People-specific AI policies address:
- What tools are approved for HR data
- Where human judgment is required versus where AI can decide
- How personally identifiable information gets handled
- Who has admin rights and what those rights mean
- Tool-by-tool risk assessment and access levels
“We started to get a lot more specific to each tool,” Sarika said. “Who has access to what, who has admin rights, and what does it mean to have admin rights. Because there’s AI incorporated in our new performance tool, employees are much more sensitive about what this data is being used for, who has access to it, how we’re using it. These were never questions we were asked with our pre-AI performance tool.”
Anticipate vetting processes and plan accordingly
Wix’s vetting process for AI vendors is “gruesome,” according to Eli. “It can take a long long long amount of time. But we do approve certain vendors that use AI.”
The process can involve figuring out security requirements that didn’t exist before. Things change so fast that what one organization figures out may not be repeatable three months later.
Security requirements will slow you down. Partnership with IT and procurement speeds you back up.
Ask vendors for SOC 2, ISO certifications, and clear data handling documentation
Don’t guess what security documentation you need. Ask vendors directly:
- SOC 2 Type II certification
- ISO 27001 certification
- GDPR alignment documentation
- Data encryption standards
- Where data is stored and who has access
- Whether customer data trains AI models (it shouldn’t)
Vendors building for enterprise understand these requirements. If a vendor can’t provide documentation quickly, that’s a red flag.
4 steps to start implementing AI in talent development
The practitioners offered specific next steps for organizations in planning or pilot phases.
1. Run pilots and expect messiness
“We are going through these procurement processes all the time. No two are exactly the same,” Kirsten Moorefield noted. “Things change so fast right now. The goals—what AI offers—are so good, but everyone is really trying hard to figure it out. Optimistic persistence, everybody.”
Pilots reveal what works in your specific context with your specific people. Generic best practices don’t translate directly. Your culture, your workflows, your security requirements create unique implementation challenges.
2. Focus on workflow redesign, not just tool adoption
Adoption is the first step. Real outcomes require rethinking how work gets done.
Sarika shared Zapier’s approach: they created automation engineer roles. These aren’t people who also deliver against functional OKRs. Their full-time job is redesigning workflows using AI within their function.
“In talent acquisition or the people function, they’ve got someone who’s an HR automation engineer,” Sarika explained. “She’s been in HR so she understands the processes. No one has to teach her that. But she also understands the products really well. She can take a problem and figure out how it could be redesigned using different tools and orchestration.”
Most organizations can’t afford dedicated roles yet. The principle holds: someone needs dedicated time to redesign workflows, not just train people on how to use AI tools.
3. Ask vendors to help you measure impact beyond activity metrics
Vendors can track metrics traditional HR systems couldn’t see.
“Work with your vendors on how to get these metrics,” Kirsten said. “We’ve done this customized with different customers. You really can be very creative and ask for anything. The worst that can happen is somebody says no.”
Organizations using Cloverleaf’s AI coach see employees get promoted at 3x the rate of those who don’t engage with coaching. That metric exists because customers asked for it.
What metrics matter for your organization? Ask vendors if they can track it. If they can’t now, they might build it if enough customers request it.
4. Decide build versus buy based on speed and complexity
Sarika wrestled with whether to build custom AI tools or buy existing platforms.
“The question still remains: Do I really need to be thinking about building? Because I don’t actually think the build always works. Sometimes buying makes more sense because that particular platform has maybe figured out something that connects more of the dots that I can’t connect—and it’s faster.”
Building makes sense when you’re orchestrating multiple tools with organization-specific data. Buying makes sense when a platform solves a complete problem and integrates with your existing systems.
The answer isn’t always one or the other. Sometimes you buy multiple tools and build the orchestration layer in the middle.
For guidance on supporting managers through transitions with AI coaching, see how to support new managers in their first 90 days.
The three biggest AI implementation failures share a common root: solving the wrong problem, expecting too much too fast, or skipping the human judgment step. The wins share a pattern too: removing friction from real workflows, measuring business outcomes instead of activity, and giving people dedicated time to redesign how work happens.
Security requirements and procurement processes will slow you down. Partnership with IT and vendors who understand enterprise needs speeds you back up. Start with pilots. Expect messiness. Ask vendors for custom metrics. And remember: AI adoption is the beginning, not the goal. Real transformation happens when workflows change.