How to Avoid Being Outshined by Those AI Upstarts and Getting the Credit You (Probably) Deserve
A weekly round-up of news, perspectives, predictions, and provocations on AI's impact on employee wellbeing, readiness and performance.
(All About Eve - the classic tale of an ambitious “ingenue” gradually eclipsing her mentor and stealing her recognition, much like those relentlessly efficient and hyper-ambitious AI assistants that elbow their co-workers aside and get all the credit!)
Some time back, I moderated a panel discussion on the challenges of managing the "extended workforce"—essentially, guns for hire who, on balance, prefer flexibility and variety to the tedious certainty of a weekly paycheck. While several participants wanted to discuss the mechanics of hiring, retention, compensation, etc., I was more interested in the "collateral" issues: How does a company project a coherent culture when the workforce is so decentralized? Does company culture even have any meaning for gig workers? How is leadership affected when the extended workforce moves upstream and even C-level people are provisional?
But what I found most interesting is the question of recognition; we take it for granted that employees seek recognition for outstanding work, and that it's important for management to find ways of rewarding them with an approving gaze and metaphorical warm embrace. How do leaders and organizations recognize stand-out work when they lack visibility into the day-to-day interactions and contributions of these workers? Some of this work can be quantified, but not all of it; many contributions are subtle and supportive, and without regular physical oversight, much of it flies under the radar.
I then started wondering: how will the broad implementation of AI, whether handling tasks previously performed by people or assisting people with specialized skills in their work, affect how their work is recognized and supported? Which goes to the broader question: how can workplaces protect and promote workers' well-being while navigating AI-driven changes, what threats does AI pose to psychological safety, and how can they be mitigated?
When Those Attention-Hogging Machines Get the Credit
The erosion begins subtly. Sarah, a marketing manager with eight years of experience, watched her carefully crafted email campaigns get attributed to the team's new AI writing assistant. Marcus, a remote software developer, stopped contributing ideas in video calls after learning his company's AI could generate code faster than he could explain his approach. Elena, a graphic designer, began working unpaid overtime to "AI-proof" her designs, terrified that her creative instincts were no longer valued.
Recognition—that fundamental human need to be seen, valued, and appreciated for our contributions—becomes murky when artificial intelligence is enlisted as a member of the team. When a brilliant insight emerges from a human-AI collaboration, who deserves the praise? When efficiency improves, is it the worker's skill or the algorithm's optimization? This ambiguity wounds egos, undermines the esprit de corps that drives teams, and strikes at the heart of what motivates people to excel.
Dennis Stolle, JD, PhD, Senior Director of Applied Psychology at the American Psychological Association, defines psychological safety as "a team culture where workers are comfortable expressing themselves and taking appropriate interpersonal risks." But how can workers feel safe taking risks when they're uncertain whether their contributions will be recognized as their own?
The recognition crisis manifests in multiple ways: seasoned professionals watching AI systems receive credit for their expertise, junior employees struggling to distinguish their value from their digital assistants, and managers unsure how to evaluate performance when human and machine contributions are intertwined. Workers begin to question not just their job security, but their professional identity itself.
Beyond Recognition: AI's Broader Assault on Psychological Safety
The threats extend far beyond recognition, creating a perfect storm of psychological vulnerabilities:
Competence Anxiety floods workplaces as employees like Dr. Patel, a radiologist, find themselves triple-checking diagnoses they'd made confidently for decades, paralyzed by doubt about whether their human expertise could compete with machine learning models. Workers who once felt mastery over their roles now second-guess fundamental skills, creating cycles of self-doubt that undermine both performance and well-being.
Voice Suppression emerges when human judgment clashes with algorithmic recommendations. Tony, a warehouse supervisor, stopped voicing safety concerns after AI-optimized scheduling overruled his ground-level insights three times in a row. When machines consistently override human input, workers learn to stay silent rather than risk appearing ignorant or obstructionist.
Isolation and Disconnection particularly plague remote workers. Rebecca, a freelance writer working from her kitchen table, began declining projects that mentioned AI collaboration, watching her income shrink as she retreated into isolation. The collaborative relationships that once provided support and validation dissolve when AI mediates more interactions.
Performative Perfection takes hold as workers like James, a customer service rep, began scripting every interaction after his AI-assisted responses were flagged as "less empathetic" than the algorithm's suggestions. The fear of being outperformed by AI drives people to suppress their authentic selves, creating workplace cultures of artificial precision rather than genuine human connection.
Identity Confusion strikes at the deepest level, as workers struggle to define their professional worth in an AI-augmented world. When machines can replicate many human capabilities, employees lose clarity about what makes them uniquely valuable, leading to existential questioning that extends far beyond the workplace.
Making it NOT All About Eve
"I'll admit I may have seen better days, but I'm still not to be had for the price of a cocktail, like a salted peanut." Margo Channing, highlighting her refusal to be underestimated, from All About Eve.
Who doesn't want to be recognized for stand-out work? Managers intuitively understand how much a simple "atta boy" can mean and the tangible good it produces. Recognition is a fundamental human need wired into our psychology and essential to our sense of self-worth. It validates our competence, confirms our belonging, and signals that our contributions matter. It's the psychological fuel that transforms routine tasks into meaningful work and isolated individuals into cohesive teams.
When workers sense their input or the work they produce isn’t being appreciated because their contributions blur with AI outputs, when praise flows to algorithms rather than people, the predictable result is withdrawal. Workers stop volunteering ideas, avoid challenging projects, and retreat into self-protective behaviors that prioritize safety over excellence. Collaboration fractures as workers become territorial, desperately maintaining visibility in AI-mediated environments. Honest communication becomes the final casualty as employees stop flagging problems and avoid admitting mistakes.
The path forward requires leaders who understand that human-AI collaboration demands acute mindfulness about attributing credit: celebrating the human judgment that guides AI tools, and honoring irreplaceable human qualities like creativity, empathy, and contextual wisdom. The future belongs to organizations and leaders that value their machines (“good machine, who’s a good machine?”) while making their people feel valued. It requires intentional investment in psychological safety, clear communication about human-AI roles, and recognition systems that celebrate both individual achievement and collaborative success. Fasten your seatbelts, it’s gonna be a bumpy ride.
“One universal truth: pausing your day to thank someone will always serve you well. No robot can do that.” Todd Horton, CEO of KangoHR, provider of manager training and employee recognition solutions.
The Podcast that Answers the Question: Can AI teach humans how to interact with other humans? (spoiler alert: it can). Cognitrainer uses AI to create a simulated training environment for students and mental health professionals, ensuring that they’re prepared for real-world clinical encounters. A fascinating conversation with the company’s two founders that we’re reposting for those who missed it.
AIX Files Poll
AI Gone Rogue
Tales of AI being unintentionally funny (i.e., woefully wrong), bizarre, creepy, (amusingly) scary, and/or just plain scary.
AI Gone Wrong: An Updated List of AI Errors, Mistakes, and Failures 2025. This is a great, instructive, and unsettling list. Here are two of the most recent items on the list:
During a crucial match in the Round of 16 at Wimbledon, a staff member mistakenly turned off the AI-powered line judge system. The result: the players had to replay a key point, costing one competitor the match. The incident, which led to an official apology, shows that even the most advanced technology can be undone by a simple, human blunder—prompting both amusement and frustration when the “all-seeing” AI gets switched off at exactly the wrong moment.
Xbox producer suggests laid-off employees should seek emotional support from ChatGPT. An executive producer at Xbox Games Studios faces significant backlash after insensitively suggesting in a LinkedIn post that employees made redundant in the company’s latest downsizing efforts should look to AI for help.
“I know these types of tools engender strong feelings in people,” Matt Turnbull said in a now-deleted post, “but I’d be remiss in not trying to offer the best advice I can under the circumstances.”
AIX-emplary Links
AI-Driven, Flexible Mental Health Platforms Go Global
Digital mental health apps such as Real are demonstrating measurable success in reducing depression symptoms for employees. These AI-augmented platforms scale support beyond traditional therapy, reaching underserved worker populations worldwide. Source: Axios
Bipartisan Policy Center Spotlights Digital Mental Health Regulation
A high-profile event brought together policymakers and mental health leaders to weigh in on regulating AI chatbots and digital tools in workforce settings as these become mainstream for employee well-being. Source: Bipartisan Policy Center
AI Chatbots Transform Mental Health Support in Workplaces
AI-powered chatbots like Woebot and Wysa provide 24/7 support, using natural language processing to deliver cognitive behavioral therapy techniques and flagging individuals who may need additional help. Source: Forbes
AI Tools Improve Early Detection of Burnout and Stress
Firms deploy AI to monitor digital communication and detect early signs of burnout or stress, aiming for timely interventions. Sentiment and emotion recognition tech play a growing role in prevention. Source: The Wall Street Journal
Personalized Mental Health Recommendations via AI
Corporations leverage AI to offer customized stress management, mindfulness routines, and therapy referrals tailored to the needs of each employee, improving both effectiveness and coverage. Source: Reuters
Growing Comfort With AI Mental Health Solutions Among Employees
Recent surveys show increasing employee trust in AI-based mental health support—over half report willingness to use digital tools for confidential support, with workplace adoption rising. Source: HR Dive
Critical Conversations at ‘Safety 2025’ on Workforce Wellbeing Innovations
At the “Safety 2025” conference, industry leaders examined AI’s role in promoting psychological safety and supporting worker mental health amid regulatory and implementation challenges. Source: EHS Today
Therapists and AI: BACP Journal Sparks Debate
A leading counseling publication highlights the therapist community’s concerns and hopes regarding AI, with evidence suggesting that AI chatbots can meaningfully reduce symptoms for some users. Source: BACP (British Association for Counselling and Psychotherapy)
Policy and Industry Focus on Privacy and Data Protection
As companies use AI to analyze workforce mental health data, privacy and ethics take center stage. Regulators and HR leaders stress robust standards to protect sensitive information. Source: SHRM
AI Recognized for Responsible Human Collaboration by Newsweek
The Newsweek AI Impact Awards celebrated companies building AI that improves worker well-being, highlighting the importance of responsible, collaborative tech in mental health support. Source: Newsweek
About
Developed in partnership with HR.com, AIX is a multimedia knowledge and engagement platform for experts, leaders, and HR peers to exchange experiences and seek guidance on cultivating mentally resilient, emotionally intelligent, and professionally adaptable workforces in an AI-augmented world. AI will increasingly touch every corner of the employee experience—from hiring to training, from task management to team dynamics. Whether its impact is positive or harmful depends largely on how HR prepares for it. The AIX platform (The AIX Files, The AIX Factor podcast, and the AIXonHR.com community) will play an important role in promoting employee wellbeing, workplace culture, and organizational readiness, the critical success factors in the age of AI.