What Andy Clark's "Extended Mind" Tells Designers About Working with AI in 2026
- Nuriye Sultan Kostak
- 4 days ago
- 9 min read
A design perspective on a recent Nature Communications article, and what it changes about how I think about AI tools in my daily practice.
TL;DR. In May 2025, philosopher Andy Clark published "Extending Minds with Generative AI" in Nature Communications. His core argument: the fear that AI is making us dumber comes from a flawed self-image, not from the tools themselves. For product designers working with AI tools every day, this reframes the question from "will AI replace us?" to "what new skill does this require?" The answer Clark proposes is metacognitive calibration, the practice of knowing what to trust, when, and how much. This post breaks down the article, the evidence it draws on, and how it applies to product design work in 2026.

Why I keep reading about AI right now
AI has moved into my daily design practice faster than I expected. Figma AI, Google Stitch, Claude, and similar tools have each changed a small part of how I work in the last year. And like most people in my field, I have been anxious about what this means. Not in a theoretical way, but in a practical one. When I accept a suggestion from a tool, is it still my thinking?
I do not want to stay passive in this anxiety. My work as a Senior Product Designer requires me to use these tools well, not avoid them. So I have been reading academic papers, not just blog posts. I am looking for frames that hold up under pressure, not just opinions.
One frame I wrote about recently came from Boyd and Markowitz's paper on the MIRA framework, which looks at AI as something users build relationships with, not just something they use. Andy Clark's article in Nature Communications is another piece of this ongoing reading, and one of the most interesting things I have come across.
The article in one paragraph
Andy Clark is a British philosopher at the University of Sussex. In 1998, he and David Chalmers wrote an influential paper called "The Extended Mind", arguing that human thinking has never been confined to the brain. It extends into the tools we use: notebooks, maps, fingers used for counting. His 2025 article applies this idea to generative AI. The argument: AI is not replacing our minds. It is the latest layer in a long history of thinking with external tools. What changes is not whether we can think anymore, but which skills matter most when we do.
A 2400-year-old anxiety
Clark opens the article with a surprising reference. In Plato's Phaedrus, written around 370 BC, Socrates worries that the invention of writing will destroy human memory. People will stop holding things in their heads because they can put them on paper. They will seem to know things, but they will only know where the scrolls are.
Today, nobody thinks writing made us stupid. We would never trade it back. But Clark points out that the same fear now surrounds generative AI. The structure of the anxiety is identical. And this repetition across 2400 years is itself worth paying attention to.
This is not just a rhetorical trick. It is a clue. If the same fear keeps arriving with every new cognitive tool (writing, printing, GPS, search engines, AI), then maybe the fear is not really about the tool. Maybe it is about a picture we have of ourselves that does not quite fit reality.
The picture Clark wants us to change
The picture he challenges is this: our mind is what is inside our skull. Anything outside the skull is either a tool we use or a crutch we lean on. In this picture, offloading anything to an external tool is a kind of loss, a weakening of the "real" mind inside us.
Clark argues this picture has always been wrong. Human cognition has always extended into the body and the environment. We count on our fingers. We sketch problems on napkins. We draw maps to think through space. We write things down not just to store them but to think through them. The act of writing is thinking, not a record of thinking.
In Clark's framing, the mind is a network. Brain plus body plus tools. The tools are not outside cognition, they are part of it. He uses a phrase from his earlier work: we are "natural-born cyborgs." This is not science fiction. It is a description of what humans have always been.
If you accept this picture, generative AI looks different. It is not a threat to a pure, unassisted mind because that mind has never existed. It is a new participant in a network that already included books, calendars, search engines, and the notes on your phone.
The evidence that changed my mind
Reframing is nice, but Clark grounds his argument in empirical research. The study that stayed with me was about Go.
In 2017, AlphaGo beat the world's best Go players using strategies no human had considered. The expectation at the time was pessimistic. Human players would copy the AI. Creativity would narrow. The game would become a monoculture.
A 2023 analysis published in Proceedings of the National Academy of Sciences by Shin and colleagues showed the opposite. After superhuman AI Go strategies emerged, human moves became more original, not less. Players did not imitate the AI. They explored new regions of the game space that the AI had made visible by breaking old assumptions.
Clark reads this as evidence for his broader point. Well-designed human-AI collaboration does not replace creativity. It reveals blind spots that centuries of human practice had obscured. The AI's strangeness, its non-human way of seeing the game, turned out to be useful precisely because it was not human.
This matters for design practice because it flips a common assumption. When I use Figma AI or Claude to generate first drafts, my worry has been that I would end up with generic output that sounds like everyone else. But if the Go study generalizes, the actual effect depends on how I use the tool. Copying it gives me generic work. Using it to see what I had been missing gives me something new.
The counterargument Clark takes seriously
Clark does not write a one-sided defense of AI. He cites a 2024 Nature paper by Messeri and Crockett that raises a serious concern. AI systems tend to lock in certain ways of thinking. They can make some approaches to research or problem-solving dominant and make alternatives harder to discover. The authors use an agricultural metaphor: a monoculture is efficient but fragile. It yields more in the short term and fails worse in the long term.
The lesson is not that AI is good or bad. It is that the outcome depends on how we design and deploy the interaction. Same tool, different design decisions, different consequences.
For product designers, this is a familiar shape. We already know that small design choices can make or break a user's experience. The question Clark's article forces is whether we apply the same rigor to our own use of AI tools as we do to the tools we design for others.
The new skill: metacognitive calibration
Clark's practical conclusion is that the skill that matters most in an AI-augmented world is not memorization or even originality in the old sense. It is knowing what to trust, when, and how much. He calls this a metacognitive skill.
This is not a vague piece of advice. It breaks down into concrete decisions:
When should I trust an AI suggestion and when should I verify it?
What kinds of tasks is this tool reliable for, and what kinds am I asking it to do at its limits?
When I accept a suggestion, am I integrating it with judgment or just passing it through?
How do I notice when I have stopped questioning?
This list is not abstract. Every time I use a design AI tool, these questions apply. The honest answer is that I have not developed consistent habits around them yet. I notice that with Claude I tend to push back more, because I have more practice with it. With newer tools, I tend to accept more, because I have less intuition about their failure modes. This inconsistency is itself the problem.
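Clark does not offer a method for building those habits, and neither does this post, but one purely illustrative way I have been thinking about making them visible to myself is to log the decisions and look at the pattern later. A minimal sketch in TypeScript, where every name and field is my own invention rather than anything from the article:

```typescript
// Hypothetical decision log for AI-assisted design work.
// Nothing here comes from Clark's article; the structure is illustrative only.

type Decision = "accepted" | "accepted_after_edit" | "verified_first" | "rejected";

interface SuggestionRecord {
  tool: string;          // e.g. "Claude", "Figma AI"
  task: string;          // what I asked the tool to do
  decision: Decision;    // what I did with the output
  heldUpLater?: boolean; // filled in later, once I know whether the call was right
}

const log: SuggestionRecord[] = [];

function record(entry: SuggestionRecord): void {
  log.push(entry);
}

// How often do I accept a tool's output without checking it at all?
function uncheckedAcceptanceRate(tool: string): number {
  const entries = log.filter((e) => e.tool === tool);
  if (entries.length === 0) return 0;
  return entries.filter((e) => e.decision === "accepted").length / entries.length;
}

record({ tool: "Claude", task: "microcopy for an empty state", decision: "accepted_after_edit" });
record({ tool: "Figma AI", task: "first-pass layout", decision: "accepted" });
console.log(uncheckedAcceptanceRate("Figma AI")); // 1 — a signal worth noticing
```

The point is not the code. The point is that my acceptance rate per tool is a real number, and I suspect it is not the number I would guess.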
The moment that unsettled me
The part of the article that stayed with me the longest is a small anecdote. Clark describes a personalized AI system built for him by computer scientist Paul Smart. It is called Digital Andy. It is a version of ChatGPT augmented with Clark's own writing, so it can answer questions using the patterns and references he has used throughout his career.
Clark says that over time, suggestions from this system start to feel like "thoughts that suddenly occur to me in conversation." He treats them the way he treats any thought that pops into his head. Not automatically accepting it, but not rejecting it either. Considering whether it actually makes sense. Deciding whether to endorse it.
This line stopped me.
I already do this with my own stream of consciousness. Not every thought that enters my head becomes a decision. Most of them I let pass. Some I examine more carefully. A few I act on. The filtering is so habitual I barely notice it.
Clark is suggesting we can apply the same reflex to AI output. Not reject it on principle. Not accept it on principle. Treat it as one more voice in the internal conversation, with the same scrutiny we apply to our own thoughts.
I am not sure I have internalized this yet. But naming it has changed something. The binary framing of "is this my idea or the AI's idea?" may be the wrong question. The better question might be "does this hold up under the same scrutiny I apply to my own thinking?"
What this means for product designers in 2026
If Clark is right, a few practical implications follow for how we work.
AI is a design material, not just a design tool. The way we integrate AI into our products (when it suggests, when it defers, when it asks, when it stays silent) shapes how users relate to it. We are not just designing features. We are designing what it feels like to think alongside a machine. This connects directly to what the MIRA framework argues from a different angle: users do not just use AI, they build relationships with it, and those relationships are shaped by every design choice we make.
Friction is not the enemy. A seamless AI experience that quietly replaces user judgment is not a good experience. It is a design failure that costs users the metacognitive muscle they need. The best AI interactions preserve the user's ability to question, compare, and refuse.
Transparency over magic. When an AI suggests something, the user needs enough context to calibrate trust. "Here is a recommendation" is weaker than "Here is a recommendation, here is what it is based on, here is how confident the system is." This is not just ethical design. It is functional design, because trust without calibration eventually collapses. (A rough sketch of what this could look like follows after these points.)
Our own practice matters. The way designers use AI in our own workflows is not separate from the products we build. If we accept AI output uncritically in our own work, we will tend to design products that invite the same behavior. The quality of our tools reflects the quality of our thinking with tools.
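To make the transparency and friction points a bit more concrete, here is a rough sketch of the shape an AI suggestion could carry with it at the interface level. This is my own illustration, not a pattern from Clark's article or from any specific product, and every name in it is hypothetical:

```typescript
// Hypothetical data shape for an AI suggestion surfaced in a product UI.
// The idea: provenance, confidence, and the option to refuse travel with
// the suggestion instead of being hidden behind it.

interface AISuggestion {
  content: string;                        // the recommendation itself
  basedOn: string[];                      // sources or signals it draws on
  confidence: "low" | "medium" | "high";  // however the system expresses it
}

type UserAction = "accept" | "compare" | "ask_why" | "dismiss";

// The interaction always exposes the full set of actions, including refusal.
function availableActions(_suggestion: AISuggestion): UserAction[] {
  return ["accept", "compare", "ask_why", "dismiss"];
}

// Rendering keeps the calibration context visible rather than optional.
function render(suggestion: AISuggestion): string {
  return [
    `Suggestion: ${suggestion.content}`,
    `Based on: ${suggestion.basedOn.join(", ")}`,
    `Confidence: ${suggestion.confidence}`,
    `Actions: ${availableActions(suggestion).join(" | ")}`,
  ].join("\n");
}

console.log(
  render({
    content: "Move the primary CTA above the pricing table",
    basedOn: ["session recordings, last 30 days", "A/B test on the pricing page"],
    confidence: "medium",
  })
);
```

Whether "dismiss" sits one tap away or is buried in a menu is exactly the kind of small design choice the monoculture concern turns on.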
What I am still working out
Clark's article made me more comfortable with some things and more uncomfortable with others.
It helped me let go of the idea that using AI is somehow cheating. Humans have always thought with external tools. AI is the newest one. Using it well is not a compromise of my thinking, it is part of how thinking works now.
But it made me more careful about the specific question of authorship and responsibility. When I put a design in front of a stakeholder and say "I made this," what exactly do I mean? The answer is no longer as clean as it used to be, and I do not think pretending otherwise is honest.
I also do not yet know how to teach these skills, either to myself or to others. Metacognitive calibration is not something you can learn from a checklist. It develops with practice, and the practice is uncomfortable. You have to sit with the uncertainty of not knowing whether your judgment is accurate.
What I am committing to, for now, is to keep reading. And to keep writing about what I read. Not because I have conclusions, but because I think this is the moment when we should all be working this out in public rather than alone.
Further reading
Clark, A. (2025). Extending Minds with Generative AI. Nature Communications 16, 4627.
Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis 58(1), 7–19.
Shin, M., Kim, J., van Opheusden, B., & Griffiths, T. L. (2023). Superhuman artificial intelligence can improve human decision-making by increasing novelty. Proceedings of the National Academy of Sciences.
Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature 627, 49–58.
Let's work on this together
If you are designing AI-powered products and want a designer who takes both the research and the practice seriously, I would love to talk. I work with startups and established teams on product design, design systems, and AI-integrated user experiences that respect user judgment instead of replacing it.
You can reach me through LinkedIn or at nuriyesukostak@gmail.com. My portfolio is at nuriyesultan.com.
This post is part of my "Ne okudum ne anladım" series, where I break down academic articles at the intersection of AI, psychology, and design practice. If you enjoyed it, there is a shorter carousel summary on Instagram, and more is on the way.
Nuriye Sultan Kostak is a Senior Product Designer with 7+ years of experience in B2C and B2B products. Her background spans mobile health SaaS (where she led end-to-end product ownership at HiDoctor, shipping design systems from scratch and improving payment conversion by around 50%) and CRO-focused design at Invesp (66% A/B test success rate, +55% conversion uplift). She holds a degree in Industrial Design from İTÜ and writes about the intersection of product design, behavioral psychology, and the tools that shape both.


