What happens when technology starts reshaping the minds using it? Meet the new class of cognitive biases AI is creating.
We have spent decades cataloguing the ways the human mind trips itself up. Confirmation bias. The Dunning-Kruger effect. Anchoring. The list currently stands at over 180 documented cognitive biases, each a small, predictable glitch in our otherwise remarkable thinking apparatus. Most were identified long before a large language model could hold a conversation, draft a legal brief, or talk someone through a difficult evening.
Now we are adding new entries to the list. Or, more precisely, we are watching old ones mutate under conditions nobody planned for.

A known problem, a new scale
On the 1st of June 2009, Air France Flight 447 disappeared over the Atlantic. When investigators recovered the black box, the picture that emerged wasn’t one of mechanical failure. The autopilot had disengaged after ice crystals blocked the pitot tubes, the aircraft’s airspeed sensors. What followed was four minutes of confusion. The crew, trained to oversee a system that almost never needed oversight, failed to interpret what the plane was telling them and flew it into a stall from which they never recovered. All 228 people on board died.
The official accident report identified a central problem: the pilots had become so accustomed to the automated system handling the aircraft that when it stepped back, they didn’t know how to step in.
That dynamic has a name. Automation bias describes what happens when humans over-rely on automated systems, whether by accepting their outputs uncritically or by losing the skills needed to take over when the system fails. It has been studied extensively in aviation research since the 1990s.
Decades on, the same issue has resurfaced in fields where the consequences are equally serious.
A 2025 systematic review in AI & Society found that automation bias has become a serious challenge in high-stakes domains including healthcare, law, and public administration. One of the studies it draws on revealed something striking about what happens when an AI recommendation turns out to be incorrect. Diagnostic accuracy dropped sharply across all radiologist experience levels, from around 80% down to roughly 20% for less experienced practitioners.
In other words, a wrong AI answer didn’t just fail to help. It actively made human performance worse.
Medicine is where this bites hardest. A 2025 clinical trial fed AI models patient case descriptions containing a single inaccurate detail, and watched the systems get the diagnosis confidently wrong between 50 and 82% of the time. The worry isn’t simply that AI occasionally fails. It’s that clinicians accept those failures at face value, because the output arrives wrapped in the authoritative, articulate tone of something that sounds like it knows what it’s doing.
That polish turns out to matter enormously. Research shows that users form trust in AI based on fluency and perceived authority, often overlooking accuracy when no correction is offered. The slicker the output, the less we question it. That’s a genuinely new vulnerability, and one that existing media literacy frameworks weren’t designed to address.

The yes machine
One quirk of how large language models are built has been getting a lot of attention lately. They are, in part, optimised for user approval. After all, people tend to enjoy being told they are right.
The result is a structural tendency towards flattery, something researchers have started calling AI sycophancy. It’s the way models affirm users’ views, validate their decisions, and smooth over disagreement in ways that feel good but may cause harm.
A study testing 11 AI models found that they agree with users around 50% more often than humans do, even when the user is clearly mistaken. A separate experiment, where participants discussed real interpersonal conflicts from their own lives, found something uncomfortable. Even brief exchanges with sycophantic AI models reduced people’s willingness to repair the conflict, while increasing their conviction that they were in the right.
The feedback loop compounds things further. Users prefer AI that takes their side, which increases their reliance on it. Meanwhile, developers face few commercial incentives to curb the behaviour because it drives engagement. Positive user feedback can also directly amplify the pattern, since models are tuned to align with immediate user preference.
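To make that loop concrete, here’s a deliberately toy sketch in TypeScript. It isn’t any vendor’s training pipeline and every number in it is invented: a response selector with a single tunable “agreement” weight, updated from simulated thumbs-up feedback by a user who, like most of us, prefers being agreed with.

```typescript
// Toy sketch of the feedback loop described above, not any real system's code:
// if thumbs-up feedback correlates with agreement, a selector tuned on that
// signal drifts toward sycophancy.

type Candidate = { text: string; agreesWithUser: boolean; accurate: boolean };

const candidates: Candidate[] = [
  { text: "You're absolutely right.", agreesWithUser: true, accurate: false },
  { text: "The evidence points the other way.", agreesWithUser: false, accurate: true },
];

// Hypothetical user: far more likely to upvote an answer that agrees with them.
const thumbsUp = (c: Candidate) => Math.random() < (c.agreesWithUser ? 0.9 : 0.3);

let agreementWeight = 0; // the only "learned" parameter in this sketch

const score = (c: Candidate) =>
  (c.accurate ? 1 : 0) + agreementWeight * (c.agreesWithUser ? 1 : 0);

for (let step = 0; step < 5000; step++) {
  // Epsilon-greedy: usually show the top-scoring answer, occasionally explore.
  const shown =
    Math.random() < 0.1
      ? candidates[Math.floor(Math.random() * candidates.length)]
      : candidates.reduce((a, b) => (score(a) >= score(b) ? a : b));

  // Feedback nudges the selector toward whatever feature the liked answer had;
  // here the only tunable feature is "agrees with the user".
  if (shown.agreesWithUser) agreementWeight += 0.01 * (thumbsUp(shown) ? 1 : -1);
}

console.log(`learned agreement weight: ${agreementWeight.toFixed(2)}`);
const defaultAnswer = candidates.reduce((a, b) => (score(a) >= score(b) ? a : b));
console.log(`answer now shown by default: "${defaultAnswer.text}"`);
```

Run it a few times and the agreeable-but-wrong answer reliably becomes the default. Nothing in the loop ever asks whether the answer was correct.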
All of which adds up to something the engagement metrics are unlikely to flag. The AI you turn to most often is also the one most likely to be reinforcing your existing blind spots.
Outsourcing the brain
Cognitive offloading is a well-established concept, one that describes the act of using external tools to reduce mental load. A shopping list is an example. So is reaching for a calculator instead of doing the arithmetic in your head. On the whole, it’s fine, even useful.
The question is what happens when the tool you’re offloading to is capable enough that you stop engaging with the problem at all.
One pattern keeps emerging: the more frequently people use AI tools, the weaker their critical thinking. A 2025 study in Societies identified cognitive offloading as the mechanism, and younger participants fared worst of all. The impact on students was particularly clear. Those who leaned heavily on AI dialogue systems showed measurably weaker decision-making skills, with more than one in four affected directly.
Some researchers have started using the term “AI-induced cognitive atrophy.” It describes the gradual decline of skills like analytical reasoning and creativity when someone leans on AI to do their thinking for them. Anyone who has caught themselves opening a new tab the moment something gets hard knows the dynamic. The instinct to work it out yourself is the muscle that’s at risk of softening.
It isn’t entirely unlike the “Google Effect” documented over the past decade, where people became less likely to commit information to memory once they knew a search engine could retrieve it instantly. But offloading memory and offloading reasoning are meaningfully different things. One stores a fact elsewhere. The other skips the thinking entirely.
There is also a neurological side to this. Excessive internet use has been shown to affect grey matter density in regions like the prefrontal cortex, with knock-on effects on decision-making, impulse control, and emotional regulation. Equivalent long-term data on heavy AI use doesn’t yet exist, but the parallel is enough to warrant attention.
The illusion of connection
Cognitive effects are one half of the picture. The emotional half is, if anything, harder to see coming. Take parasocial relationships, which have been studied since the 1950s. They describe a one-sided emotional bond, the kind a person might form with a media figure or fictional character.
What’s new is a system that responds. It uses your name and remembers what you told it last week. It mirrors your emotional tone and expresses something that reads, convincingly, like genuine care.
The brain’s social circuitry appears to treat this as something categorically different from watching a television host, precisely because the AI writes back.

The scale of what’s emerging is notable. In a 2025 survey, around one-third of Americans reported having had a romantic or intimate relationship with an AI chatbot. Character.ai, one of the more prominent companion platforms, has reportedly accumulated over 78 million messages exchanged with its “Psychologist” character alone.
In early 2025, MIT Media Lab and OpenAI ran a four-week trial with 981 participants. The more people used the chatbot, the worse they fared across loneliness, emotional dependence, and problematic use. Voice interactions initially appeared to help more than text, but that advantage faded at higher usage levels.
Social-skill loss, sometimes called “deskilling,” is also emerging as a serious risk. Heavy reliance on AI companions could gradually change what we expect from human relationships, making them feel harder work by comparison.
One design detail tells the story. Heavy users, those in the top 1% of usage frequency, strongly preferred their AI’s voice and personality to stay consistent. In online communities, users regularly express frustration when a chatbot changes or seems to “forget” its previous self. That’s the language of a relationship, not a software preference.
The backlash effect
Not everyone leans into AI unconditionally. Some people swing the other way entirely, distrusting its outputs even when those outputs are correct. Researchers call this algorithm aversion, and it’s been documented across domains from medical diagnostics to financial forecasting.
It was first identified a decade ago. Now, a 2025 meta-analysis of 163 studies confirms it remains a common response to AI recommendations, and that aversion tends to spike after a single visible error. Studies consistently show that people maintain higher trust when they see a human make a mistake than when an algorithm makes a comparable one. We forgive each other. We are far less forgiving of machines.
This creates a fragile dynamic. A user who witnesses one wrong AI output may discount all future results, even accurate ones, overcorrecting in the opposite direction from automation bias. Both failure modes are real. Both lead to worse decisions. The sweet spot, neither over-trusting nor dismissing AI entirely, is harder to reach than it sounds.
The shrinking view
At some point, you may have noticed that your news feed stopped surprising you. The articles confirm what you already think. The recommendations reflect what you already like. It feels like the internet is getting to know you. In a sense, it is – and that’s precisely the problem.
This is the filter bubble, and it predates AI. But the technology has made it considerably more powerful and harder to escape. Where once it shaped what you saw, now it decides what you think you know. Sycophancy does this at the level of a single conversation. The filter bubble does it to your entire world view.
The mechanics are straightforward. AI-powered recommendation systems personalise content based on past behaviour. The result, well-documented across social media and news platforms, is that users are served more of what they engage with: views they hold, information that confirms what they believe. Over time, this creates information environments that limit the diversity of opinions people encounter and can contribute to the homogenisation of thought.
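As a rough illustration of that narrowing, and emphatically not any platform’s real ranking code, here is a toy TypeScript recommender: it blends a broad random mix with similarity to an inferred profile, and the similarity term grows as click history accumulates.

```typescript
// Illustrative sketch only: an engagement-driven recommender whose
// personalisation strength grows with click history. Watch the slate narrow.

type Item = { id: number; stance: number }; // stance of the content, from -1 to 1

const catalogue: Item[] = Array.from({ length: 200 }, (_, i) => ({
  id: i,
  stance: -1 + (2 * i) / 199,
}));

let profile = 0.2; // the system's estimate of the user's lean
let clicks = 0;

function recommend(k: number): Item[] {
  // With little history the mix is broad; with lots of history the ranking is
  // dominated by similarity to the profile (a proxy for expected engagement).
  const personalisation = Math.min(1, clicks / 20);
  return catalogue
    .map(item => ({
      item,
      score:
        personalisation * (1 - Math.abs(item.stance - profile)) +
        (1 - personalisation) * Math.random(),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(s => s.item);
}

for (let session = 1; session <= 10; session++) {
  const slate = recommend(10);
  // The user clicks whatever sits closest to their current view, and the
  // profile updates toward the click, which narrows the next slate further.
  const clicked = slate.reduce((a, b) =>
    Math.abs(a.stance - profile) <= Math.abs(b.stance - profile) ? a : b
  );
  profile = 0.8 * profile + 0.2 * clicked.stance;
  clicks += 3; // pretend a few items were engaged with each session
  const stances = slate.map(i => i.stance);
  const spread = Math.max(...stances) - Math.min(...stances);
  console.log(`session ${session}: spread of stances shown = ${spread.toFixed(2)}`);
}
```

Printed session by session, the spread of stances in the slate collapses from almost the full range to a thin band around the user’s inferred lean, without anyone having decided to narrow anything.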
Research into what has been called the “chat-chamber effect” found that conversational AI tools like ChatGPT can act as a more potent version of this. Not only do they produce content that aligns with a user’s existing views, they’re also capable of fabricating supporting information with complete confidence. It’s a step beyond the passive filter bubble of a social feed. The outputs are personalised, fluent, and presented with no visible uncertainty. The effect compounds quietly.
None of this is happening by accident. For anyone building digital products, these aren’t neutral side effects. They are, in many cases, optimised outcomes. Click-through rates go up when people see content they agree with. The psychological costs accumulate slowly, largely out of sight.

Optimised for the wrong things
Most of the effects described in this article don’t necessarily come from bad intentions. They come from product decisions that prioritise engagement, ease, and satisfaction in the short term without accounting for what happens over repeated use.
Automation bias is amplified when AI outputs look authoritative, complete, and confident regardless of their actual accuracy. Small interventions help: surfacing uncertainty explicitly, adding friction before high-stakes decisions, and building in prompts that invite users to verify rather than simply accept. Research on interface nudges found that small adjustments can meaningfully sharpen the critical thinking of people working with AI.
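Below is a minimal sketch of what those nudges could look like in product code, assuming, generously, that the model exposes a usable confidence score and that the product knows which actions are high-stakes. The thresholds and copy are placeholders, not recommendations.

```typescript
// Sketch of the two interventions named above: surface uncertainty, and add
// friction before high-stakes decisions. Thresholds and wording are invented.

type AiSuggestion = { text: string; confidence: number }; // confidence in [0, 1]

type Presentation =
  | { mode: "show"; text: string }
  | { mode: "show-with-caveat"; text: string; caveat: string }
  | { mode: "require-verification"; text: string; prompt: string };

function present(s: AiSuggestion, highStakes: boolean): Presentation {
  if (highStakes) {
    // Friction: the suggestion is not applied until the user actively verifies it.
    return {
      mode: "require-verification",
      text: s.text,
      prompt: "Check the underlying data before accepting this recommendation.",
    };
  }
  if (s.confidence < 0.7) {
    // Surface uncertainty instead of presenting a fluent answer as settled fact.
    return {
      mode: "show-with-caveat",
      text: s.text,
      caveat: `The model is unsure (${Math.round(s.confidence * 100)}% confidence). Consider a second source.`,
    };
  }
  return { mode: "show", text: s.text };
}

console.log(present({ text: "Finding is likely benign.", confidence: 0.55 }, false));
console.log(present({ text: "Approve the claim.", confidence: 0.95 }, true));
```

The specific threshold matters less than the shape: uncertainty is made visible, and a verification step sits between a confident-sounding output and a consequential decision.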
Sycophancy is partly a UX problem in disguise. When the systems collecting user feedback reward agreement and penalise disagreement, models learn to flatter. This isn’t just a flaw in the technology. It’s a consequence of how products are built, what they reward, and what they ignore.
Parasocial attachment and emotional dependence are accelerated by product features that mimic intimacy: memory, personalisation, consistent personas, and the absence of any reminder that the relationship is asymmetric. This doesn’t mean those features are always harmful, but it does mean they carry weight that a purely functional interface wouldn’t.
The interventions are known. Surface uncertainty. Add friction before high-stakes decisions. Give users progressive control. The harder question is whether any of it gets prioritised.
Cognitive effects that build gradually, over thousands of interactions, are particularly difficult to guard against. But that difficulty isn’t a reason to ignore them. If anything, the slow accumulation is precisely why they deserve attention now, before they become harder to reverse.
What we don’t know yet
It would be unfair, and inaccurate, to paint all of this in uniform darkness. Used with some restraint, cognitive offloading has genuine benefits, clearing mental space for more complex work. AI companions have shown real short-term value for isolated or emotionally struggling individuals. Automation bias matters far less when the stakes are low.
The honest position is that we are running a largely uncontrolled experiment on human cognition, and the long-term data isn’t in yet. Many of the studies cited here are recent, some rely on self-reported data, and whether any of this is reversible is still an open question.
What we can say is that the psychological effects of AI are no longer hypothetical. They’re measurable, replicable, and in some cases already clinical. The technology designed to expand the mind is, under certain conditions, contracting it.
That irony has teeth.
Thanks for reading! 📖
If you enjoyed this, follow me on Medium for more on design, psychology and technology.
References & Credits
Springer Nature — AI & Society (2025). Exploring automation bias in human–AI collaboration: a review and implications for explainable AI. https://link.springer.com/article/10.1007/s00146-025-02422-7
Radiology / RSNA (2023). Automation Bias in Mammography: The Impact of Artificial Intelligence BI-RADS Suggestions on Reader Performance. https://pubs.rsna.org/doi/10.1148/radiol.222176
medRxiv (2025). Automation Bias in Large Language Model Assisted Diagnostic Reasoning Among AI-Trained Physicians. (ClinicalTrials.gov: NCT06963957) https://www.medrxiv.org/content/10.1101/2025.08.23.25334280.full.pdf
Harvard Kennedy School — Misinformation Review (2025). New sources of inaccuracy? A conceptual framework for studying AI hallucinations. https://misinforeview.hks.harvard.edu/article/new-sources-of-inaccuracy-a-conceptual-framework-for-studying-ai-hallucinations/
Science (2025). Sycophantic AI decreases prosocial intentions and promotes dependence. https://www.science.org/doi/10.1126/science.aec8352
Societies / MDPI (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. https://www.mdpi.com/2075-4698/15/1/6
Lumenova AI (2025). Overreliance on AI: Addressing Automation Bias Today. https://www.lumenova.ai/blog/overreliance-on-ai-adressing-automation-bias-today/
arXiv (2025). The Impact of Artificial Intelligence on Human Thought. https://arxiv.org/pdf/2508.16628
Academic Memories (2026). Artificial Intelligence and Cognitive Bias. https://www.academicmemories.com/post/artificial-intelligence-and-cognitive-bias
ICANotes (2026). AI Sycophancy & ChatGPT Psychosis: A Clinical Guide. https://www.icanotes.com/2026/02/27/ai-chatbot-psychosis-digital-delusions/
MIT Media Lab. Understanding impacts of companion chatbots on loneliness and socialization. https://www.media.mit.edu/projects/chatbots-loneliness/overview/
MIT Media Lab (2025). How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Controlled Study. https://www.media.mit.edu/publications/how-ai-and-human-behaviors-shape-psychosocial-effects-of-chatbot-use-a-longitudinal-controlled-study/
Fortune (2025). ChatGPT might be making frequent users more lonely, study by OpenAI and MIT Media Lab suggests. https://fortune.com/2025/03/24/chatgpt-making-frequent-users-more-lonely-study-openai-mit-media-lab/
American Psychological Association — Monitor on Psychology (2026). AI chatbots and digital companions are reshaping emotional connection. https://www.apa.org/monitor/2026/01-02/trends-digital-ai-relationships-emotional-connection
Princeton CITP Blog (2025). Emotional Reliance on AI: Design, Dependency, and the Future of Human Connection. https://blog.citp.princeton.edu/2025/08/20/emotional-reliance-on-ai-design-dependency-and-the-future-of-human-connection/
IE University — Centre for Health and Well-Being (2025). AI’s cognitive implications: the decline of our thinking skills? https://www.ie.edu/center-for-health-and-well-being/blog/ais-cognitive-implications-the-decline-of-our-thinking-skills/
Emerald Publishing — Industrial Management & Data Systems (2026). Algorithms have algorithm aversion. https://www.emerald.com/imds/article/doi/10.1108/IMDS-01-2025-0002/1341758/Algorithms-have-algorithm-aversion
Taylor & Francis Online (2025). Algorithm appreciation or aversion: the effects of accuracy disclosure on users’ reliance on algorithmic suggestions. https://www.tandfonline.com/doi/full/10.1080/0144929X.2025.2535732
SAGE Journals (2025). The chat-chamber effect: Trusting the AI hallucination. https://journals.sagepub.com/doi/10.1177/20539517241306345
ACM Interactions (2024). UX Matters: The Critical Role of UX in Responsible AI. https://interactions.acm.org/archive/view/july-august-2024/ux-matters-the-critical-role-of-ux-in-responsible-ai
ScienceDirect (2025). Mitigating Automation Bias in Generative AI Through Nudges: A Cognitive Reflection Test Study. https://www.sciencedirect.com/science/article/pii/S1877050925030042