Here’s a conversation I keep having. A leader calls me and says, “We rolled out an AI tool three months ago. The technology works great. Nobody’s using it.”
Then they ask the wrong question: “How do we get our people to adopt the technology?”
That question assumes the problem is your people. It’s not. The problem is that you’re trying to push people toward a tool without addressing what the tool threatens. And what it threatens is not what you think.
Resistance to AI is almost never about the technology. It’s about identity, security, and trust.
That’s the thesis of this piece, grounded in research from Wharton’s Center for Leadership and Change Management, peer-reviewed organizational psychology, and hard-won lessons from organizations that got AI adoption right — and ones that got it spectacularly wrong.
The Real Diagnosis: Three Fears, Not One
Wharton professor Stefano Puntoni and his colleague Erik Hermann published a landmark paper in 2025 that finally put language to what most of us sense intuitively. Drawing on Self-Determination Theory, they found that AI threatens three basic psychological needs every person carries into the workplace:
Competence → Identity Threat: “Am I still valuable? Does my expertise still matter?”
Autonomy → Security Threat: “Will I lose my job, my career path, my control over my own work?”
Relatedness → Trust Threat: “Can I trust my leaders? Is this process fair? Is the AI even reliable?”
Every act of resistance you’re seeing — the quiet non-adoption, the eye rolls, the “I’ll get to it later” — maps to one of these three fears. And the data confirms it. Microsoft found that 52% of AI users are reluctant to admit they use it at work. Pew Research reports that 52% of U.S. workers are more worried than hopeful about AI. But here’s the number that should keep every leader up at night: Accenture found that 94% of workers are ready to learn AI skills. Only 5% of organizations are actually teaching them.
Your people aren’t resisting AI. They’re waiting for you to lead.
Fear #1: Identity — “Am I Still Valuable?”
This is the deepest fear, and the one leaders most often miss. People don’t just do their work. They are their work. The teacher who crafts her own lesson plans. The pastor who writes his own sermons. The grant writer who takes pride in her prose. When you introduce a tool that can do those things in seconds, you’re not just changing a process — you’re poking at the core of who someone believes they are.
Puntoni’s research shows that people readily accept AI for utilitarian tasks — scheduling, data entry, formatting — but actively resist it for identity-driven skills, the work that makes them them. Loss aversion makes it worse: behavioral economics research shows that losing a piece of your professional identity feels roughly twice as painful as gaining a new skill feels good.
What to do: Before you launch any AI tool, redefine the human’s role and put it in writing. If AI handles first drafts, what does the human do? They edit, judge, contextualize, and decide. Make that the new job description before the tool goes live. And change the scorecard: if you measure speed and volume, AI wins and humans feel worthless. If you measure judgment quality, client relationships, and creative problem-solving, humans win and AI becomes their amplifier.
What to say Monday morning: “AI will handle the first draft. Your job is shifting to the part that requires your judgment — the part no machine can do. Your 15 years of experience aren’t going away. They’re what make you the person best equipped to use these tools wisely.”
Fear #2: Security — “Will I Lose My Job?”
A YouGov poll found that 48% of U.S. workers believe AI will reduce jobs in their industry — up from 29% just eighteen months earlier. And this fear arrives on exhausted ground: Gartner found that employee willingness to support organizational change collapsed from 74% in 2016 to just 43% by 2022. AI isn’t landing on fresh soil. It’s landing on a workforce whose appetite for change has already fallen 31 points.
What to do: Create a sandbox — a risk-free space to experiment in, where nothing affects performance reviews. Announce reskilling before deployment, not after. Publish a clear chart of which decisions AI can inform and which require human final judgment. Prosci’s research shows that 63% of AI implementation failures are human problems, not technical ones. People don’t fear competent AI. They fear invisible AI.
A story worth stealing: When IKEA’s AI chatbot absorbed 47% of its customer service calls, the company retrained 8,500 displaced workers as interior design consultants. The new service generates over €1.3 billion in revenue. No net jobs lost. “Augment, don’t automate” only works if you redesign roles before the technology displaces them.
Fear #3: Trust — “Can I Believe You?”
Gartner reports that only 36% of employees trust their organizations. Nearly two-thirds of your people are receiving your AI announcement from a posture of suspicion. They’re not hearing “exciting new tool.” They’re hearing “what aren’t they telling us?”
AI adoption requires trust on three fronts: trust in leadership intent (“Are you doing this for us or to us?”), trust in process fairness (“Do I have a voice?”), and trust in the AI itself (“Is this tool reliable?”). Epic Systems learned this the hard way — its sepsis prediction algorithm claimed 76–83% accuracy, but an independent study found it missed 67% of cases. Clinicians developed alert fatigue and stopped trusting any AI warning. The opacity was the failure, not the technology.
What to do: Build governance with your people, not over them. Create an AI review process that includes frontline team members. Audit the unwritten rules — if managers privately penalize AI use while the company formally promotes it, the unwritten rule wins every time.
What to say Monday morning: “I’m going to be honest: we don’t have all the answers about where AI takes us. But here’s what I can promise — we will not make a single change to your role without talking to you first.”
Your First 90 Days
Days 1–30: Map your stakeholders. Conduct a fear audit using the three fears above as your buckets: identity, security, trust. Select a small pilot team of willing volunteers — not your most tech-savvy people, your most respected ones. Create a sandbox. Draft your acceptable-use policy with employee input.
Days 31–60: Harvest stories from the pilot. Audit the unwritten rules. Adjust your approach based on what you’re learning. Let the pilot team recruit the second wave — not management.
Days 61–90: Publish results transparently. Formalize AI literacy pathways. Launch your governance group with frontline representation. Measure adoption, proficiency, trust scores, and change fatigue.
The One Trap That Sinks Everything
The single biggest mistake I see leaders make: deploying AI and promising training later. The gap between giving someone a tool and giving them the skills to use it wisely is where every fear accelerates. Accenture’s data says it all: 94% of workers want to learn; only 5% of organizations are teaching them. That’s not a training gap. That’s an organizational betrayal. And your people can feel the difference.
The technology will keep getting smarter. The question is whether your organization will get wiser. It starts with a simple shift: stop asking “How do I get my people to adopt AI?” and start asking “What are my people afraid of, and how do I lead them through it?” That’s not a technology problem. That’s a leadership problem. And it’s one you already know how to solve.