
Are Large Language Models the future of innovation—or its biggest threat?



Innovators are turning to Large Language Models (LLMs) like ChatGPT, Google Bard, Microsoft Bing, Meta's LLaMa, and others to speed up tasks ranging from user and problem research to ideation, problem-solving, and business model validation. While these tools offer great potential, they can do more harm than good if treated as the absolute truth. I’m a big fan of these tools and frequently use them to explore new applications in innovation, but true innovation requires diving into the unknown, understanding customer needs, and validating assumptions through real-world interaction. Think of innovation as if you were a scientist exploring the unknown. LLMs, trained on past data, can’t replace the insights gained from firsthand experience with people. They can’t take you to where true innovation happens: out in the field, not behind your keyboard.

AI may assist in the creative process, but it cannot replace the human touch that’s required to make a breakthrough.

The core of innovation: Asking questions to find causality

Great innovation starts with asking the right questions: What keeps people up at night or worries them? Why is that relevant? What do customers say they want, and what do they really need? What is the root cause? Why do they care? What would truly improve their lives?


AI can generate a wide variety of thoughts and assumptions, but it doesn’t know how to ask these questions—or whom to ask. And once you uncover a pain point, AI can’t help you dig into it with empathy. That’s your job as an innovator.

Even more critically, innovation rests on three pillars:

  • Desirability—Do people even want this?

  • Viability—Can this be a sustainable business?

  • Feasibility—Can we actually build it?


Can an algorithm trained on historical patterns really validate these dimensions? No. Because understanding desirability, viability, and feasibility requires more than crunching data and calculating probabilities—it demands context, judgment, and empathy. These are uniquely human skills (at least for now) that can’t be automated.


Automation complacency: The danger of “good enough”

Here’s where LLMs can be dangerous: they’re fast, they’re confident, and they’re usually “good enough”. The goal, however, isn’t to settle for mediocrity but to push beyond the obvious. To truly innovate, you need a broader variety of ideas: challenge these models to move past their “obvious answers” and offer a wider range of alternatives, the kind of diversity that naturally emerges when you co-innovate with people outside your domain. “Good enough” isn’t innovation; it’s just the obvious—and it’s exactly what your competition is doing too.


This over-reliance on automation leads to what I call automation complacency. When we trust the tool instead of challenging assumptions, we stop seeking the hard-to-find insights—the unmet needs and hidden pain points that drive real innovation.

The answers LLMs provide are based on historical data. They excel at identifying correlations. But what about causation? What about understanding why customers behave the way they do? AI can’t tell you that. It can’t predict how people will respond to something truly new. And it certainly can’t replicate the iterative process of forming hypotheses, testing them, and learning from failure—the process that defines real innovation.


Automation bias: Losing the human edge

Then there’s automation bias—the tendency to over-trust automated systems. It’s tempting to let AI do the thinking for you. But when that happens, you risk losing what makes innovation truly possible: human intuition and creativity.


Here’s a reality check: innovation isn’t a spreadsheet exercise. It’s messy, emotional, and deeply rooted in empathy. Customers don’t buy products—they buy solutions to their unsolved problems. Yet often they can’t articulate what they need or why they want it. Sometimes they might even be politely misleading, trying not to expose a lack of knowledge, budget, authority, or other limitations. That can send you down a path of false assumptions.


If you’re not walking in their shoes, truly listening to their frustrations, and observing how they interact with the world, you’ll miss what really matters. AI can’t feel empathy. It can’t replace the gut-level understanding that comes from direct conversations with real people.


Creativity isn’t a formula

LLMs excel at optimizing repetitive tasks. But innovation isn’t repetitive. It’s a combination of explicit knowledge (things we can teach) and implicit knowledge (things we learn by doing). Machines can handle the first part, but they fall flat on the second. Any change to the model requires retraining or fine-tuning with additional data. Humans, on the other hand, learn through active reflection and adaptation: when they experiment, they don’t just gather data—they reflect on the results, reframe the problem, and adjust their approach.


Yet the most creative breakthroughs don’t come from data—they come from serendipity. From tinkering, experimenting, and failing. If you skip that messy, iterative process, you’re not innovating; you’re just recycling what’s already out there. And anyone with access to this technology is doing the same, which today means practically everyone on the planet.


No hypotheses, no testing, no learning

At its core, innovation is about learning. You form hypotheses, test them, and use the results to iterate. Without hypotheses and validation loops, there’s no feedback. And without feedback, there’s no progress. And without a process, no one will finance your endeavor.


Here’s the kicker: AI doesn’t learn like humans. It doesn’t adapt to failures or reframe the problem when things don’t work. That’s your job. If you rely too much on automation, you risk losing the ability to do this effectively.


Can AI be creative?

Some voices in the AI community argue that today’s models are not only capable of generating novel ideas but can also demonstrate a form of creativity. These advocates point to AI’s ability to produce unique combinations of information or solve problems that humans might not have considered. LLMs can be trained to adapt to new data or feedback, and they are often praised for their creative applications in fields like design, music, and even scientific discovery.


But let’s be clear: there is a difference between generating creative outputs and having a creative process. AI can certainly synthesize information in ways that seem creative, but it lacks the human qualities that truly define creativity—empathy, intuition, and judgment. AI can’t feel the nuances of a customer’s needs, nor can it understand the emotional weight behind a product’s design. These are distinctly human traits that are critical for real innovation.


Conclusion: Large Language Models - Powerful tools, not substitutes

Let’s be clear: tools like ChatGPT are valuable. They’re fast, efficient, and great for generating ideas or finding quick answers. But they’re just tools. They can’t replace the hard work of engaging with customers, running experiments, and getting your hands dirty in the real world.


Innovation isn’t about convenience. It’s about resilience. It’s about facing uncertainty, embracing failure, and staying curious. You can’t outsource that to a machine.

So, use LLMs for what they are—a resource to save time and generate insights. But remember: the real breakthroughs come when you step out of the building, talk to your customers, and test your ideas in the wild. That’s where innovation happens.


Want to read more? Visit my blog or, better yet, subscribe to my newsletter.

Yetvart Artinyan

P.S.: Do you want to know more about how to make your innovation project successful and avoid typical pitfalls?

  1. Extend your team and knowledge on a temporary or permanent basis: Contact me for a conversation.  

  2. Transfer the knowledge: Book one of the innovation bootcamps 

  3. Get a keynote on this topic for your organization: Book a keynote now

