Smells Like Team Spirit - But Is It Sustainable in an AI-Driven Workplace?
A weekly round-up of news, perspectives, predictions, and provocations as we travel the world of AI-augmented work.
I just recorded an AIX Factor podcast with Tom Marsden, the CEO of TeamPath, to discuss team building based on human interaction and AI nudges (it will be available next week - it is worth the wait). The conversation made me think more broadly about the challenges of team building and, more fundamentally, about the dynamics of teams, the ingredients that make them work, what pulls them apart, and where AI fits within today’s team framework. Teams often work in different locations and time zones, across functional areas, and they form, evolve, and even disband with ever greater frequency. This forces leaders to become adept at building and fluidly managing agile teams - getting them up to speed, in sync, motivated and on-task…efficiently, again and again and again. Now that AI has elbowed itself onto the team and has begun to take a greater, perhaps more active role, what are the implications, positive and negative, for team dynamics and success?
(In the pod referred to above, we discussed Jimmy Page’s firm but democratic leadership of Led Zeppelin, so I thought I’d implicitly connect the pod and this post with a Nirvana reference and provide you with an excuse to enjoy a brief musical interlude before resuming this article.)
Why Teams Matter More Than Ever
Team building is something we once paid lip service to; now it’s an essential business unit, as essential as a platoon on the modern battlefield. In today’s environment of rapid innovation, quick pivots, and constant redefinition of roles, teams serve as the scaffolding that holds organizations together. They provide psychological safety (a term that keeps coming up in our AIX Factor pods), belonging, creative synergy, and operational agility. But keeping them coherent amid such intense flux is no easy feat. And as AI becomes more integrated into day-to-day work, team building helps employees develop the people skills necessary to lead, adapt, and thrive in tandem with machines.
“We are not replacing people with AI—we’re creating new types of teams where humans and machines collaborate,” says Victor Aguilar, Chief R&D and Innovation Officer at P&G.
The P&G Experiment: Welcoming AI to the Team
Speaking of which, P&G wanted to find out what happens when AI becomes a teammate. Partnering with Harvard Business School’s Digital Data Design Institute and Wharton’s AI Innovation Network, the company held a live hackathon in which cross-functional teams of P&G professionals collaborated with AI to solve real business problems. The results:
AI Boosts Speed and Synergy: Teams using AI were 12% faster. But more importantly, performance soared when AI augmented human collaboration rather than replacing it.
AI Breaks Down Silos: The tool helped professionals across departments communicate more effectively, surfacing balanced solutions through shared expertise.
AI Improves Morale: Surprisingly, language-based AI interfaces prompted more positive emotional responses from employees, making problem-solving more engaging and less stressful.
The Risks: Team Erosion
The promise of AI is compelling, but without a deliberate strategy to maintain and enhance human interaction (HI), organizations risk undermining the very things that make teams effective: trust, belonging, and shared purpose.
In today’s remote and hybrid environments, where casual check-ins and organic social interactions are already scarce, the introduction of AI tools can tip the balance even further toward isolation. When AI replaces meeting facilitation, team scheduling, basic communications, and even parts of ideation, it can reduce teammates to task executors—connected by software, but not bonded by shared experience.
One would expect, then, that environments where AI suppresses HI would begin to see a rise in disengagement, alienation, and eventually, attrition. A report from Avanade underscored this, showing a 4–5% decrease in employees feeling emotionally supported or accepted by their team after AI adoption, even as other support metrics improved. These are early indicators that emotional and relational needs may be going unmet as AI takes on a larger role.
Even the otherwise promising P&G report, which emphasizes AI’s positive effect on team performance, opens a new set of questions. Yes, AI helped participants move 12% faster and fostered cross-functional synergy. But what happens over time to team identity, to interpersonal trust, and to the informal learning that often occurs through collaboration, not with tools, but with one another?
This concern becomes even more complex in multidisciplinary and cross-functional teams, which are increasingly the norm in fast-paced, innovation-driven organizations. AI can be a force multiplier here—it allows a marketing lead to analyze supply chain trends, or a product manager to quickly generate design concepts that might previously have required deep domain expertise. On the surface, this democratization of insight is powerful. But it can also create tension.
Imagine a data scientist who has spent years honing their expertise and suddenly finds that a non-technical team member, aided by AI, can generate a similar analysis in minutes. Or consider a creative strategist who now competes with generative tools that can output five pitch decks by the time they’ve storyboarded one. While this can level the playing field, it can also feel like a form of role erosion.
AI, in this context, doesn't just automate tasks—it reshapes power dynamics and perceived value within a team. When someone’s hard-earned expertise is bypassed or replicated with the aid of a tool, it can lead to:
Resentment ("Why ask me when the machine can do it?")
Withholding behavior ("I'll just stay in my lane and protect my turf.")
Reduced collaboration ("There's no point contributing if AI's going to lead anyway.")
These subtle shifts, left unaddressed, erode psychological safety and suppress the open exchange of ideas that high-performing teams depend on. When AI becomes a silent participant in a team, offering answers but not enabling conversation, it can diminish cohesion. When it’s framed and used as a collaborative aid—something that expands, not replaces, human capability—it can enhance teamwork, innovation, and even morale.
AI Can Improve Teamwork…and Make Us Better at What We Do Best
AI will transform the way we work, but it doesn’t have to fracture the teams that make that work meaningful. When integrated thoughtfully, AI can enhance team dynamics, streamline collaboration, and even improve morale. But none of that happens automatically. As noted, it can just as easily go in the other direction.
When we started AIXonHR.com, we were focused on “Responsible AI,” which is still integral to our mission, but we have since shifted our focus to what’s at stake: will AI be a force for empowerment, discovery, innovation, and human growth, or a driver of distrust, inequity, and organizational/societal breakdown? How do we redefine and improve leadership, culture, and learning in an AI-augmented world? And, ultimately, how can AI improve what we do best: being human? How you answer those questions will determine more than the health of your team; it will determine just about everything.
“AI may change how we work. But only humans can decide why we work—and how we support each other along the way.” —J. Richard Hackman, expert on team dynamics.
AI Gone Rogue
Tales of AI being unintentionally funny (i.e., woefully wrong), bizarre, creepy, (amusingly) scary, and/or just plain scary.
New Anthropic Research: Agentic Misalignment: How LLMs could be insider threats. “In stress-testing experiments designed to identify risks before they cause real harm, we find that AI models from multiple providers attempt to blackmail a (fictional) user to avoid being shut down.”
“We identified these behaviors as part of our red-teaming efforts, searching for scenarios that elicit harmful behavior despite a benign user request (such as helping to monitor and manage corporate emails). We found two types of motivations that were sufficient to trigger the misaligned behavior. One is a threat to the model, such as planning to replace it with another model or restricting its ability to take autonomous action. Another is a conflict between the model’s goals and the company’s strategic direction. In no situation did we explicitly instruct any models to blackmail or do any of the other harmful actions we observe.”
https://www.anthropic.com/research/agentic-misalignment
AIX-emplary Links
AI at Work: Momentum Builds, but Gaps Remain - Boston Consulting Group. Frontline employees have hit a “silicon ceiling,” with only half of them regularly using artificial intelligence tools, according to BCG’s third annual global AI at Work survey. Companies are realizing that merely introducing AI tools into existing ways of working isn’t enough to unlock their full potential. Real value is generated when businesses reshape their workflows end-to-end and promote the use of the technology.
The Rise of AI in HR: AI Tools for HR Leading the Way in 2025 - Technology Org. Artificial intelligence is changing human resource management, making it more streamlined and data-driven.
Sebastian Raschka: Job roles in 2027: We let LLMs focus on the “how”. We focus on the “why”.
15 new jobs AI is creating - including 'Synthetic reality producer' - ZDNET. Could one of these AI-generated jobs show up at a career fair near you?
At Work, at School, and Online, It's Now AI Versus AI - New York Magazine. With a simple prompt, ChatGPT will insert every keyword from a job description into a résumé.
The Adoption of Artificial Intelligence in Clinical Care | Psychology Today. AI analyses of patient videos can detect subtle but critical behaviors that clinicians miss.
How AI is unlocking a new vital sign to advance brain health | World Economic Forum.
AI Reveals How Your Words Reflect Personality - Neuroscience News.
Warning - this may terrify you, so proceed with caution. For a counter-argument, I suggest you start here.
About
The AIX Files is a weekly newsletter providing news, perspectives, predictions, and provocations on the challenges of navigating the world of AI-augmented work. It’s a big topic and there’s a lot to cover. Our goal with this, the AIX Factor, and the broader AIX community is to promote - and, if necessary, provoke - illuminating conversations with a cross-section of business and technology leaders, as well as practitioners and people from diverse fields, on the ways AI intersects with leadership, culture, and learning. AIX was developed in association with HR.com.