The Invisible Hand of AI Addiction: What it Looks Like and Means to Be in Its Grip
A weekly round-up of news, perspectives, predictions, and provocations as we travel the world of AI-augmented work.
AI dependency is more widespread and problematic than you think. Its pernicious, not-always-easy-to-detect effects are likely to spread further and intensify as the technology gets more powerful and more intuitive, and as we entrust more of our lives, particularly our thought processes, to it.
In a recent Bloomberg article titled, “Addicted to ChatGPT? Here’s How to Reclaim Your Brain,” users confess a growing reliance on generative AI tools like ChatGPT, Gemini, and Claude—not as assistants or genius-level interns, but as virtual co-equals they fully entrust to make important decisions, orchestrate their work, manage their calendars, organize their thoughts, even mediate their emotions. The upsides are numerous and may ultimately outweigh the downsides, but make no mistake, there are downsides, particularly as we develop a dependence on these tools bordering on - and sometimes crossing over into - addiction.
On the AIX Factor podcast, Ryan Carrier, Executive Director of ForHumanity, spoke about the invisible hand of social media - the “nudges, deceptive design, and dark patterns” that guide our behavior without our full awareness. Digital addictions are not new, but AI addiction is different in degree and kind. To begin with, AI doesn’t just influence our choices; it makes them for us. And unlike social media, which demands our constant attention, generative AI offers the illusion of offloading our cognitive load: the more it helps, the more we hand over.
The Bloomberg article cites a 2025 study by Dutch management consultancy BearingPoint that found young employees use AI tools far more than senior managers, often because they’re still developing an “internal compass.” The study suggests that this growing dependence could leave young professionals unsure of their own judgment, struggling with self-confidence, and prone to impostor syndrome.
What follows discusses two new studies and several recent articles on addictive AI, and concludes with thoughts on good AI habits and “hygiene.”
The Invisible Pull of AI
Generative AI is designed for engagement. It feels intuitive, responsive, even eerily human-like. Studies have found that users form deep attachments to AI tools based on perceived anthropomorphism, interactivity, intelligence, and personalization—all characteristics that create a powerful “flow experience,” where users feel absorbed and immersed. This experience, combined with “emotional attachment,” is what researchers say fuels addiction.
Tao Zhou and Chunlei Zhang, in their study “Examining Generative AI User Addiction from a C-A-C Perspective,” argue that addiction isn’t just about excessive use—it’s about the cognitive, emotional, and behavioral cycle that keeps users hooked. They identify three specific addiction pathways where “high engagement leads to dependence, emotional reliance, and decreased real-world socialization.”
Note: The cognition-affect-conation (C-A-C) perspective is a model that explains how an individual's thoughts (cognition), feelings (affect), and actions (conation) are interconnected and influence behavior.
The Consequences of AI Dependence
For many users, AI addiction manifests subtly—an unconscious shift from sporadic usage to daily reliance, which can result in reduced creativity, lower interpersonal engagement, and a greater likelihood of blindly trusting AI-generated misinformation.
It can also erode our critical thinking skills: “Critical thinking is a muscle,” says Cheryl Einhorn, founder of the consultancy Decision Services and an adjunct professor at Cornell University (from “Addicted to ChatGPT? Here’s How to Reclaim Your Brain”). Joshua Rothman’s article in the New Yorker, “Why Even Try if You Have A.I.?” paints a vivid picture of this:
We’re drawn to activities that invite us to grow, by trying and trying again, because we want to evolve as people. Life is mostly repetitive—wake, eat, work, sleep, repeat—and each day can feel like an unsatisfying circle. But repetition with variation broadens us. And yet, more and more, it’s becoming clear that artificial intelligence can relieve us of the burden of trying and trying again. A.I. systems make it trivially easy to take an existing thing and ask for a new iteration.
The study “Investigating Affective Use and Emotional Well-being on ChatGPT” adds an interesting paradox: “voice-based AI conversations can enhance well-being, yet prolonged engagement correlates with emotional dependence and decreased social interaction.” This is amplified in another recent study, “Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Model”:
This paper explores how ChatGPT fosters dependency through key features such as personalised responses, emotional validation, and continuous engagement. By offering instant gratification and adaptive dialogue, ChatGPT may blur the line between AI and human interaction, creating pseudosocial bonds that can replace genuine human relationships. Additionally, its ability to streamline decision-making and boost productivity may lead to over-reliance, reducing users' critical thinking skills and contributing to compulsive usage patterns. These behavioural tendencies align with known features of addiction, such as increased tolerance and conflict with daily life priorities. This viewpoint paper highlights the need for further research into the psychological and social impacts of prolonged interaction with AI tools like ChatGPT.
Can AI Companies Prevent Addiction?
The researchers behind both studies urge AI developers to rethink engagement strategies. Should AI prioritize “healthy interaction patterns” over maximizing engagement? Can companies implement features to “mitigate compulsive use,” such as “timed breaks or transparency around AI behaviors”? The challenge lies in balancing positive AI experiences with ethical responsibility. An important element of the conflict is succinctly captured below:
“Then OpenAI dropped another big piece of news, that board member and former head of Facebook’s engagement loops and ad yields Fidji Simo would become their ‘uniquely qualified’ new CEO of Applications. I very much do not want her to take what she learned at Facebook about relentlessly shipping new products tuned by A/B testing and designed to maximize ad revenue and engagement, and apply it to OpenAI. That would be doubleplus ungood.” (Zvi Mowshowitz, Don’t Worry About the Vase)
Good AI Habits and Hygiene
What makes AI addiction more insidious than its digital predecessors is not just its subtlety, but its intimacy. Social media demands attention; AI earns trust. It doesn’t just entertain or distract—it helps, it performs, it relieves us of burdens that once defined our intellectual independence. That relief, ironically, can erode the very muscles we most need to thrive in a world of complexity: judgment, creativity, confidence, and critical thinking.
Unlike the dopamine cycles of social platforms or the compulsions of gaming, AI dependency often masquerades as productivity. It feels like you're doing more, learning faster, thinking smarter. But over time, it may quietly diminish the very capacities it claims to enhance. You’re not just scrolling past life—you’re outsourcing it. This makes the problem harder to see, and harder still to solve.
The movement toward responsible AI has so far focused largely on fairness, transparency, and bias mitigation—urgent and important priorities. But as AI becomes embedded in our daily lives, responsibility must also include the design of systems that protect human agency, support autonomy, and avoid exploitative engagement loops. This means confronting the potential for AI to create not just dependence—but deference.
For organizations, particularly HR departments, this is an inflection point. The same tools that promise efficiency and insight must be introduced with policies that emphasize empowerment, not replacement. HR leaders can:
Promote AI literacy that includes ethical use, cognitive impacts, and personal boundaries;
Evaluate vendors not just on performance, but on how their tools support user control and transparency;
Champion cross-functional collaboration with IT and compliance to align technology with well-being;
Create space for reflection, feedback, and alternative paths—ensuring AI augments work, rather than automating judgment.
But ultimately, the final line of defense is personal. Just as we’ve learned to silence notifications, set screen time limits, or take digital detoxes, we’ll need new hygiene habits for this era. That means asking hard questions of ourselves: Am I using AI to extend my capabilities—or to escape discomfort? Am I checking my work, or ceding my voice?
It feels strange to imagine that, someday soon, we might need to start reminding ourselves to think. But that’s what artificial intelligence does—it thinks—and, in many contexts, promises to do the thinking for us. In a world saturated with technology, we already have to remind ourselves to put down our phones; to go outside; to see friends in person; to go places instead of staring at them on our screens; to have non-technological experiences, such as boredom. If we’re not careful, then our minds will do less as computers do more, and we will be diminished as a result. (from “Why Even Try if You Have A.I.?”)
AI will keep getting smarter. The challenge for us is to stay self-aware. “Now that machines can think for us, we have to choose whether to be the passengers or pilots of our lives.”
AI Gone Rogue
Tales of AI being unintentionally funny (i.e., woefully wrong), bizarre, creepy, (amusingly) scary, and/or just plain scary.
I put ChatGPT, Gemini and Claude through the same job interview — here’s who got hired. Source: Tom’s Guide
With AI tools now smarter and more creative than ever, I wondered: could a chatbot survive a job interview and actually land a job?
Beyond assisting with tasks, I wondered how each one would interview if given the opportunity like a real candidate: could it think creatively, respond to curveballs, and show that it understands the role? I just had to know.
To test this, I created a fake job opening at a fictional tech media company and invited three of today’s top AI chatbots to apply: ChatGPT-4o, Claude 3.7 Sonnet and Gemini 2.0.
They each received the same prompts and interview questions across five rounds — from writing and data analysis to handling failure. I even gave them the opportunity to ask follow-up questions. Here’s how they did and who I would actually hire.
This is what AI ‘ghosts’ will be like: Working after our death, advising our grandchildren, or accidentally revealing an affair. Source: Ars Technica
A Google study analyzes the unexpected consequences of using this new technology to “reincarnate” oneself or loved ones. The section on foreseeable dangers is extensive; one clear example is emotional dependence on a machine that represents someone who is no longer there. The authors use the word “reincarnation” loosely: “It’s primarily a metaphor,” Brubaker warns. “We use it to describe a generative ghost that acts by imitating a deceased person. It’s very easy to imagine an AI that imitates a person’s voice and mannerisms,” he adds.
AIX-emplary Links
AI secretly helped write California bar exam, sparking uproar. A contractor used AI to create 23 out of the 171 scored multiple-choice questions. “I work in litigation as an economic expert. AI is absolutely having an impact on law firms. However, most contract disputes that go all the way to trial come down to the interpretation of five or fewer words in a hundred-page document. And, when you dig into why those words were chosen, it turns out that every other part of the document was crystal clear and not open to interpretation, but those five words were purposefully vague in order to get both sides to sign the contract. Such precision is going to require people. So, the work at the highest level hasn't changed much.”
“There is a huge place for AI in law, and it is already being used, but, in an industry where precision is the most important thing, and it is competitive in that there is almost always a winner and loser in each case that is being paid handsomely to point out the smallest of errors of its adversary, AI has to be carefully used, until the hallucination problem gets solved to something like one error in a billion tries.”
Defining tech in 2025 — AI agents. Artificial intelligence (AI) keeps evolving — and with agentic AI, the digital agents joining our ranks will soon take automation to the next level. Despite slow take-up in HR and mixed reports about the return on investment (ROI), global executives see AI as the top business priority and the biggest potential value driver in 2025. Agentic AI is transforming company systems, processes and the workforce, and HR leaders who don’t prepare will be left behind. Source: Mercer
Zendesk CEO: AI Is Already Hallucinating Less Than Humans Are Making Mistakes. Source: CX Today
ChatGPT’s hallucination problem is getting worse, according to OpenAI’s own tests. Source: MSN
Parsing How Winners Use AI. More than 90% of about 1,300 commercial executives surveyed by Bain have scaled up at least one AI use case. Those that have moved beyond pilots and superficial applications, expanding the number of use cases and integrating AI into core processes and customer interfaces, realize the greatest value. Source: Bain & Company
AI-generated code introducing risk: Cyber expert. Computer programming code generated with artificial intelligence as well as AI “hallucinations” are introducing new risk and exposures. Source: Business Insurance
Companies tend to hire based on vibes, not skills, study shows. Textio found that candidates who received an offer were 12 times more likely to be described as having a “great personality” than those who were rejected. Source: HR Dive
About
The AIX Files is a weekly newsletter providing news, perspectives, predictions, and provocations on the challenges of navigating the world of AI-augmented work. It’s a big topic and there’s a lot to cover. Our goal with this, the AIX Factor, and the broader AIX community is to promote - and, if necessary, provoke - illuminating conversations with a cross-section of business and technology leaders, as well as practitioners and people from diverse fields, on the ways AI intersects with leadership, culture, and learning. AIX was developed in association with HR.com.