The Innovator’s AI Dilemma

Here’s a question that should keep every leader up at night: What is generative AI actually doing to our ability to think critically?

Not “could do” or “might do” in some distant future. What is it doing right now, today, to the 70% of professionals who use these tools weekly?

I’ll confess upfront: I’m one of those people. I used AI to help me find the research for this article. I used AI to help me get started with writing it (though there’s still a heavy human hand involved—my kids would be disappointed if their dad got fully replaced by a chatbot). So I’m not writing this from some ivory tower of technological purity. I’m writing it as someone who genuinely loves these tools and is starting to wonder if that love might be a little too uncritical.

We finally have some data to work with. And it’s not the dystopian “AI will replace us” narrative that dominates headlines. It’s more subtle, more insidious, and in many ways more concerning.

AI is becoming a cognitive crutch that we’re leaning on more heavily each day without noticing the weight shifting off our own legs.

The Study That Should Change How You Work

Researchers from Carnegie Mellon University and Microsoft Research just published a landmark study in the proceedings of CHI 2025, one of the most prestigious conferences in human-computer interaction. They surveyed 319 knowledge workers who use AI tools like ChatGPT and Copilot at least once a week, collecting 936 real-world examples of how these professionals actually use AI in their jobs. Real tasks. Real deadlines. Real pressure.

The headline finding: Higher confidence in AI is associated with less critical thinking.

The more you trust the tool, the less you engage your own brain. The correlation is clear and statistically significant across multiple measures.

Another key finding: Higher self-confidence in your own abilities is associated with more critical thinking, even though it requires more effort.

People who believed in their own skills worked harder at thinking critically, even when AI could have handled things for them. They chose the harder path. They kept their weight on their own legs instead of leaning on the crutch.

What “Less Critical Thinking” Actually Looks Like

The researchers used Bloom’s taxonomy, a well-established framework for categorizing cognitive activities, to measure what’s happening to our thinking when we use AI. They looked at knowledge recall, comprehension, application, analysis, synthesis, and evaluation.

Across nearly all these dimensions, workers reported that AI made cognitive tasks feel easier. Which sounds great, right? Efficiency! Productivity! More time for the important stuff!

But “feels easier” and “is better” aren’t the same thing. Stop using a muscle, and it atrophies.

The study identified three fundamental shifts in how we think when AI enters the picture:

First: From information gathering to information verification. One lawyer in the study noted that “AI tends to make up information to agree with whatever points you are trying to make, so it takes valuable time to manually verify.”

Many workers skip the verification step entirely when they trust the tool. And as they use AI more, they trust it more.

Second: From problem-solving to response integration. Instead of wrestling with problems ourselves, we’re increasingly in the business of evaluating AI-generated solutions and adapting them to fit our context. A marketing professional observed that AI content required “substantial editing to align with specific marketing guidelines and tone preferences.”

You can only evaluate and adapt effectively if you have your own mental model of what good looks like. If you’ve never solved the problem yourself, how do you know the AI’s solution is actually good? It’s like asking someone who’s never cooked to judge a cooking competition. They might know if something tastes bad, but they can’t tell you why or how to fix it.

Third: From task execution to task stewardship. We’ve become managers of AI output rather than creators of original thought. The researchers use the word “stewardship” deliberately; it implies oversight without ownership, supervision without deep engagement.

Why This Matters for Innovation

For leaders navigating the most rapidly changing business environment in history, this research raises uncomfortable questions.

Critical thinking is the foundation of innovation. It’s what allows you to look at a market everyone else has written off and see opportunity. It’s what enables you to question assumptions that competitors take for granted. It’s the cognitive muscle that powers creative problem-solving when there’s no playbook to follow.

And that muscle, this study suggests, may be quietly atrophying.

The pressure to innovate has never been more intense. Disruption cycles that used to take decades now unfold in months. The leaders who thrive aren’t the ones with the best AI tools. Everyone has access to those. The winners are the ones who can think differently, see around corners, connect dots that others miss.

That kind of creativity comes from pushing past the obvious, questioning the default, wrestling with problems until something genuinely new emerges.

If your team is leaning on AI for every brief, every strategy document, every brainstorm, what happens to that creative capacity over time?

The Confidence Paradox

The finding I keep returning to is this: People with higher self-confidence engaged in more critical thinking, even though they reported it required more effort.

This seems backwards at first. Shouldn’t confident people be the ones most likely to coast? To say “I’ve got this handled” and let AI do the heavy lifting?

The opposite is true, and I think I understand why.

When you’re confident in your own abilities, you have a standard. You know what excellence looks like because you’ve produced it yourself. When AI gives you something that’s merely adequate (competent but generic, correct but soulless), you notice. You push back. You improve.

But when you’ve never developed that internal compass for quality, you don’t know what you’re missing. The AI output looks fine because you lack a mental model of what “great” could be.

This has serious implications for how we develop talent. A generation of workers who lean on AI from day one may never build the cognitive strength to evaluate whether AI is serving them well. They’ll have nothing to compare against. The crutch will feel like a natural extension of themselves because they never learned to walk without it.

Three Barriers to Thinking for Yourself

The study identified why workers don’t engage in critical thinking even when they probably should. These barriers cluster into three categories:

Awareness barriers: People simply don’t realize critical thinking is needed. They trust AI for “simple” tasks without questioning whether the task is actually simple or whether AI is actually trustworthy for it. Several participants expressed what the researchers call “overestimating AI capabilities”—assuming the tool could handle things it demonstrably cannot.

I’m guilty of this one. More than once I’ve caught myself nodding along with an AI response that sounded authoritative, only to realize later that it was confidently wrong. These tools have the tone of a tenured professor even when they’re making things up.

Motivation barriers: Even when people know they should think critically, they often don’t have time or incentive. A sales representative explained: “The reason I use AI is because in sales, I must reach a certain quota daily or risk losing my job. Ergo, I use AI to save time and don’t have much room to ponder over the result.”

Ability barriers: Some workers want to evaluate AI output but don’t know how. They lack domain knowledge to spot errors, or they can’t figure out how to improve AI responses even when they sense something is wrong. One participant received negative feedback on an AI-assisted document but admitted, “I’m not sure how I could have improved the text that ChatGPT wrote.”

These barriers compound each other. When you don’t have time to think critically, you don’t build the skills to do it well. When you can’t do it well, it feels pointless to try. The crutch becomes load-bearing.

Time to Pump the Brakes

Most organizations have spent the past two years in full acceleration mode with AI. Deploy faster. Adopt wider. Integrate deeper. The competitive pressure is real, and no one wants to be left behind.

But this research suggests it might be time to step back and ask some harder questions.

What is our AI strategy actually optimizing for? If it’s purely speed and efficiency, we may be systematically undermining the creative capacity that drives long-term differentiation. The quarterly gains could be masking a slow erosion of the very capabilities that built the company in the first place.

Building a culture of innovation has always required a delicate balance: enough structure to execute, enough freedom to experiment. AI adds a new variable to that equation. It’s a powerful tool that, used thoughtlessly, can quietly homogenize thinking across your entire organization. When everyone uses the same AI to generate the same kinds of outputs, where does differentiation come from?

If innovation is your competitive advantage, this question deserves more than a shrug.

What Leaders Should Actually Do

This isn’t a call to abandon AI. That ship has sailed, and frankly, these tools offer genuine value. The question is how to capture that value without sacrificing the human capabilities that matter most.

Audit your team’s AI dependence. Not with suspicion, but with curiosity. Where is AI being used? For what kinds of tasks? Are there domains where your people have stopped developing their own judgment because AI handles it? You can’t address a problem you haven’t mapped.

Create intentional spaces for unassisted work. Consider building “AI-free” time into certain projects, particularly early-stage ideation and strategic thinking. 

The goal isn’t to make work harder for its own sake. It’s to ensure your team maintains the cognitive muscles that AI can’t replicate. The struggle itself is where capability develops.

Reward thinking, not just output. If your culture celebrates speed above all else, you’re training people to skip the reflection that produces breakthrough ideas. Find ways to recognize the kind of rigorous thinking that takes longer but yields more original results.

Model the behavior yourself. Leaders who visibly wrestle with hard problems, who share their thinking process, who admit uncertainty, who revise their views based on evidence, create permission for others to do the same. If you’re outsourcing your own thinking to AI, your team will follow.

Build verification into the workflow. Don’t rely on individuals to self-police their AI use. Create checkpoints where AI-generated work gets scrutinized. Make cross-referencing and source-checking a normal part of how work gets reviewed.

Invest in developing judgment, not just skills. Training programs often focus on how to use AI tools more effectively. That’s fine, but incomplete. The scarcer capability is knowing when AI output falls short, and that requires domain expertise developed through direct experience.

The Long Game

The most valuable thinkers in the coming decade will be bilingual: fluent in both AI-assisted work and unassisted thinking. They’ll know when to leverage the tools and when to put them down. They’ll maintain the cognitive capabilities that allow them to evaluate AI output rather than simply accept it.

AI can give you fast. AI can give you competent. AI can give you good enough.

But the ideas that change industries, that solve problems no one else can crack, that create genuine differentiation, still require a human brain that’s been doing its reps.

The crutch is comfortable. That’s exactly why it requires conscious management.

The organizations that thrive won’t be the ones that adopt AI fastest. They’ll be the ones that figure out how to use AI strategically while preserving, and developing, the human capacity for original thought.

That’s the innovator’s dilemma of our moment. And how you resolve it will shape what your organization is capable of for years to come.
