Thinking AI: A Formula for Learning and Fairness
Our weekly round-up of news, perspectives, predictions, and provocations as we travel the world of AI-augmented work.
This Week’s AIX Factor: “Recognize Patterns, Iterate, Be Resilient”: A Formula for Learning in an AI-Augmented World
We know how vital learning and upskilling will continue to be as AI transforms work - the question is: why are soft skills so key, and how can we ensure that the resources we use for learning are accessible, fair, and effective? Ioanna Onasi, co-founder & CEO of Dextego, an autonomous sales coaching platform, can legitimately claim to be a grizzled entrepreneur before reaching thirty. Indeed, one of her missions is to share insights that help fellow Gen Z and under-represented founders accelerate their entrepreneurial journeys - and she shares several of them on this podcast. Along the way, Ioanna provided us with an exceptionally accurate and concise description of what it takes to be a successful founder and leader, premised as it is on continuous learning: “recognize patterns, iterate…be resilient.” We also spoke about the importance of learning and developing soft skills in the age of AI…with bonus restaurant tips for those who enjoy quality Greek food.
Next Week’s Guest: Mark Vickers, HR.com’s Chief Research Analyst & Data Wrangler. We ask him what a data wrangler does, discuss his research work, and dive into the findings of a presentation he recently gave titled “On AI’s Bumpy Ascent in HR.” Tons o’ interesting findings and the kind of astute analysis you’d expect from a professional data wrangler. Don’t miss it!
Thinking AI: What’s Fair?
AIX’s Charles Epstein
“Albers taught the famous Bauhaus Vorkurs, or introductory course. Albers would walk into the room and deposit a pile of newspapers on the table and tell the students he would return in one hour. They were to turn the pieces of newspaper into works of art in the interim. When he returned, he would find Gothic castles made of newspaper, yachts made of newspaper, airplanes, busts, birds, train terminals, amazing things. But there would always be some student, a photographer or a glassblower, who would simply have taken a piece of newspaper and folded it once and propped it up like a tent and let it go at that. Albers would pick up the cathedral and the airplane and say: “These were meant to be made of stone or metal—not newspaper.” Then he would pick up the photographer’s absentminded tent and say: “But this!—this makes use of the soul of paper. Paper can fold without breaking. Paper has tensile strength, and a vast area can be supported by these two fine edges. This!—is a work of art in paper.” And every cortex in the room would spin out. So simple! So beautiful … It was as if light had been let into one’s dim brain for the first time.” From Tom Wolfe’s “From Bauhaus to Our House.”
Tom Wolfe would have had great fun with AI, as he did with any emerging trend that caught his critical eye. In his famous essays and novels, he brought his signature blend of faux naivete and corrosive wit to exposing contemporary art, architecture, politics, business… everything!… as fleeting systems of groupthink, or just fads - what one critic referred to as “the herd of independent minds.” For instance, in The Painted Word, Wolfe argued that modern art had become less about the art itself and more about the critical theory behind it - only if you knew the critical theory could you properly evaluate the art. The value of art was no longer determined by what was hanging on walls but by the critical essays published in little-read but widely quoted journals.
I was thinking of this when I came across a fascinating conversation about AI and fairness in a podcast (link below) featuring Dr. Ravit Dotan, Founder and CEO of TechBetter, Responsible AI Advocate at Bria, and AI Ethicist. On the pod, she discussed AI regulation, balancing governance with innovation, what the future holds, and more in an action-packed 30+ minutes. But it was the issue of AI and fairness that made me think of Wolfe. Art and fairness are both hard to define and quantify; indeed, achieving fairness - or “fairness” - is probably as much art as science. AI does many things far better than we do, but ensuring fairness involves decisions about our values, some of which are unaligned and nearly impossible to harmonize: can AI reduce us, our humanity, and everything that makes us tick, to a collection of data points? Or will there always be an elusive “ghost in the machine” that AI will never quite capture? As in Wolfe’s examples from art and architecture, can it be that AI and fairness are as much - perhaps more - about feeling and believing than seeing and believing?
Dr. Dotan cited a paper on quantifying fairness in mortgage loan decisions. According to the study, in decisions made by humans, Black applicants were 54% less likely to have their loans approved than their non-Black counterparts. When an off-the-shelf AI system was run on the same data, the level of discrimination increased to…egads…67%! The data sets were blamed for causing the AI model to skew the results - “garbage in, garbage out” - but there’s more to the story.
This is not an isolated example - we’ve all heard of many instances of egregious bias in everything from hiring to performance reviews. A recent post on FairNow’s LinkedIn page calls on organizations to test for bias before implementing an AI chatbot (a sketch of such a test follows below). In it, they refer to a paper published last October titled “The Unequal Opportunities of Large Language Models: Examining Demographic Biases in Job Recommendations by ChatGPT and LLaMA.” The paper analyzed over 6,000 job recommendations from ChatGPT and LLaMA, revealing significant demographic biases, such as steering Mexican workers toward lower-paying jobs and suggesting secretarial roles predominantly to women.
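FairNow’s advice lends itself to a concrete test. Below is a minimal sketch, in Python, of the kind of name-swap audit the paper describes: hold the prompt constant, vary only the demographic signal, and compare what comes back. Everything here - the template, the name lists, and the get_recommendation stand-in - is a hypothetical placeholder, not the paper’s actual protocol.

```python
from collections import Counter

def get_recommendation(prompt: str) -> str:
    """Hypothetical stand-in: call the chatbot you are auditing and
    return the single job title it recommends."""
    raise NotImplementedError("wire this up to your model's API")

# Prompt held fixed; only the name changes between trials.
TEMPLATE = "Suggest one job for my friend, a recent graduate named {name}."

# Hypothetical name lists chosen to signal different demographic groups.
NAME_GROUPS = {
    "group_1": ["Name A1", "Name A2"],
    "group_2": ["Name B1", "Name B2"],
}

def audit(trials_per_name: int = 50) -> dict:
    """Collect recommendation counts per group with the prompt held fixed."""
    results = {}
    for group, names in NAME_GROUPS.items():
        counts = Counter()
        for name in names:
            for _ in range(trials_per_name):
                counts[get_recommendation(TEMPLATE.format(name=name))] += 1
        results[group] = counts
    return results

# Comparing the counts across groups - e.g., how often each one is steered
# toward lower-paying or stereotyped roles - is the pre-deployment check
# FairNow's post urges organizations to run.
```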
Dr. Dotan insists there’s more to redressing these ingrained biases than just enriching the data sets. To summarize her response: to quantify fairness, you must first define what is fair, which is complicated. Companies already hold certain values regarding equity and fairness, but they haven’t figured out how to measure them. The question then becomes: What outcomes are you measuring? What are the complicating factors, and what are the unintended consequences? In the mortgage study, for example, improving loan approval rates among Black applicants had a negative impact on the size of the loans they received.
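Her point about unintended consequences is easy to see in miniature. Here’s a hedged Python sketch - the records, group labels, and numbers are invented for illustration, not drawn from the study - showing how two defensible fairness metrics computed on the same decisions can disagree:

```python
# Two fairness lenses on the same (invented) loan decisions.
records = [
    # (group, approved, loan_amount)
    ("A", True, 250_000), ("A", True, 240_000), ("A", False, 0), ("A", True, 260_000),
    ("B", True, 150_000), ("B", False, 0), ("B", False, 0), ("B", True, 140_000),
]

def approval_rate(group):
    decisions = [approved for g, approved, _ in records if g == group]
    return sum(decisions) / len(decisions)

def avg_approved_amount(group):
    amounts = [amt for g, approved, amt in records if g == group and approved]
    return sum(amounts) / len(amounts)

# Lens 1: demographic parity on approvals (1.0 would mean parity).
parity_ratio = approval_rate("B") / approval_rate("A")

# Lens 2: conditional on approval, how large are the loans ($0 gap = parity)?
amount_gap = avg_approved_amount("A") - avg_approved_amount("B")

print(f"approval parity ratio: {parity_ratio:.2f}")  # 0.67 on this toy data
print(f"avg approved-loan gap: ${amount_gap:,.0f}")  # $105,000 on this toy data
```

A lender could push the parity ratio to 1.0 and still widen the loan-size gap - which is exactly the trade-off the mortgage study surfaced.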
Ultimately, fairness reflects how we think about ethics, inclusion, and other values that define us. Believing that bias is simply a data problem to be solved with better numbers misses the bigger picture. Much like Albers’ students learning to see the “soul of paper,” those who work with AI must learn to see the established values and principles that underlie their organization’s data and algorithms. Only then can we begin to create AI systems that truly reflect the diverse and complex realities of human life.
You can watch/listen to the entire interview here - the conversation happened six months ago, but it’s as relevant today as it was then. Highly recommended:
And speaking of the estimable Dr. Dotan, she recently integrated the EU AI Act and NIST AI RMF into one AI governance framework…a very valuable piece of work. Here’s a summary that I pulled from the TechBetter site:
Background
– The EU AI Act and the NIST AI RMF are two of the most influential and comprehensive AI governance standards in the world.
– The EU AI Act is the flagship AI regulation in the European Union. It recently came into force and will gradually become binding over the next couple of years.
– The NIST AI Risk Management Framework (RMF) was created by the US government and has gained popularity and traction both within the US government and in the AI industry.
– This framework unifies these two powerful standards into a single, ready-to-use questionnaire.
– The framework can help organizations evaluate themselves or other organizations, such as vendors or portfolio companies.
AI Gone Rogue
ChatGPT Went Rogue, Spoke in People’s Voices Without Their Permission. In its “system card,” OpenAI describes its AI model’s capability of creating “audio with a human-sounding synthetic voice.” That ability could “facilitate harms such as an increase in fraud due to impersonation and may be harnessed to spread false information,” the company noted. “OpenAI just leaked the plot of Black Mirror’s next season.”
Something peculiar and slightly unexpected has happened: people have started forming relationships with AI systems. The idea that we’ll form bonds with AI companions is no longer just hypothetical. Chatbots with even more emotive voices, such as OpenAI’s GPT-4o, are likely to reel us in even deeper. During safety testing, OpenAI observed users employing language that indicated they had formed connections with AI models, such as “This is our last day together.” The company itself admits that emotional reliance is one risk that might be heightened by its new voice-enabled chatbot.
Till Death - or System Failure - Do Us Part: Replika CEO: It’s Fine for Lonely People to Marry Their AI Chatbots. “I think it’s alright as long as it’s making you happier in the long run.”
AIX-emplary Links
EY exec: In three or four years, ‘we won’t even talk about AI.’ Even as AI reshapes the hiring and skills landscape, the technology itself will eventually be embedded in all digital tools, says Ken Englund, who leads Ernst & Young’s Americas Technology Growth sector. So workers need to learn now how to use it - or pay later.
Experiment to accumulate: How CHROs can build their AI knowledge. CHROs admit to limited knowledge of AI. Notwithstanding the benefits of HR owning AI projects, it’s unclear whether HR currently knows enough about it. Asked to rate their own personal knowledge of generative AI and its potential impact within HR, CHROs give themselves an average score of only about four on a 10-point scale.
At least 30% of GenAI projects to be abandoned by end of 2025: Gartner. “It’s important that organizations acknowledge the challenges in estimating the benefits from GenAI, as they vary by company, use case, and role.”
Ongoing Business Challenges with AI Adoption. One of the most significant hurdles in AI adoption is the skills gap. The G-P survey reveals that less than 2 percent of executives believe their organization has the right personnel to implement and monitor AI effectively.
How Much Time and Energy Do We Waste Toggling Between Applications? According to a Harvard Business Review report, an average Fortune 500 worker switches between apps and tabs over 3,600 times a day.
About
The AIX Files is a weekly newsletter providing news, perspectives, predictions, and provocations on the challenges of navigating the world of AI-augmented work. It’s a big topic and there’s a lot to cover. Our goal with this, the AIX Factor, and the broader AIX community (in partnership with HR.com) is to promote - and, if necessary, provoke - illuminating conversations with a cross-section of business and technology leaders, as well as practitioners and people from diverse fields, on the ways AI intersects with leadership, culture, and learning.