05/01/2026 | News release | Distributed by Public on 05/01/2026 22:46
A paradox hovers over our increasingly AI-dependent world. On the one hand, artificial intelligence can make the world a better place (or so we're told). On the other hand, algorithms have no imagination or consciousness, and thus can know only the status quo, as reflected in the data they are trained on. And our current world is far from perfectly meritocratic or fair.
Jingyuan Yang, assistant professor of information systems and operations management at Costello College of Business at George Mason University, suggests that the paradox is compounded by conventional thinking around AI. "The standard view is that fairness is a tax on efficiency. The way conventional systems are structured, fairness checks are added almost as an afterthought that is assumed to negatively impact system performance," she says.
Is the "better," optimized world of AI destined to replicate, or perhaps even exacerbate, existing inequalities? Yang's ongoing research, conducted in collaboration with Pengzhan Guo of Duke Kunshan University and Keli Xiao of Stony Brook University, points to an appealing alternative. It uses AI systems as a proving ground for a theorized "fairness-performance complementarity": the idea that, under certain conditions, fairness and performance reinforce one another.
"Our 'fairness-by-design' framework utilizes reinforcement learning, which is a type of machine learning (ML). But unlike most machine learning algorithms, ours includes multiple agents competing for finite resources in a dynamic environment, not a static one," Yang says. "That makes our paradigm much more structurally similar to many real-world environments in which various people compete over time for finite resources."
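The setting Yang describes can be sketched in a few lines: several agents repeatedly claim from a finite pool of opportunities, so one agent's choice changes what remains for the others. The following toy example is purely illustrative (the agent count, opportunity names, and reward values are invented, not taken from the study), but it shows why such an environment is dynamic rather than static.

```python
# Hypothetical sketch of multiple agents competing for finite resources.
# Once an opportunity is claimed, it is gone for that round, so agents'
# choices affect one another. All names and values here are invented.

OPPORTUNITIES = {"A": 5.0, "B": 3.0, "C": 1.0, "D": 0.5}  # reward per option

def run_round(order):
    """Agents claim opportunities in the given order; each takes the best left."""
    pool = dict(OPPORTUNITIES)  # fresh copy each round
    rewards = {}
    for agent in order:
        best = max(pool, key=pool.get)  # greedy claim of the richest option
        rewards[agent] = pool.pop(best)
    return rewards

# If stronger agents always pick first, the same agents always win big.
print(run_round(order=[0, 1, 2, 3]))  # agent 0 gets 5.0, agent 3 gets 0.5
```

In a static supervised-learning setup, by contrast, one agent's prediction has no effect on the examples other agents see, which is the structural difference Yang highlights.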
Fairness was integrated in two stages. First, the framework was designed to "nudge" high-performing agents towards exploratory choices that might maximize their rewards. As Yang explains, "In this framework, high-performing agents are held in an exploratory mode for longer, while lower-performing agents settle into stable paths sooner." Second, options that were abandoned as a result of agents' reward-seeking behavior were redistributed, with lower-performing agents getting first crack at the best opportunities.
As Yang summarizes, "The exploratory activity of the high performers releases opportunities that the system channels down toward the weaker performers. Theoretically, this increases fairness while retaining individual choice and without constraining performance."
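The two mechanisms described above can be illustrated with a short sketch. This is not the authors' code; the exploration schedule, the function names, and all numeric values are assumptions chosen to make the two ideas concrete: (1) exploration propensity grows with past performance, so high performers keep releasing options, and (2) released options are reallocated best-first to the weakest performers.

```python
# Illustrative sketch (not the study's implementation) of the two
# fairness mechanisms: prolonged exploration for high performers, and
# weakest-first redistribution of the options they release.

def exploration_rate(performance, base=0.1, scale=0.2):
    # Mechanism 1: exploration probability rises with past performance,
    # so high performers keep trying new options instead of camping on one.
    return min(1.0, base + scale * performance)

def redistribute(released_options, agents_weakest_first):
    # Mechanism 2: options released by explorers are offered best-first
    # to the lowest performers. Each option is an (id, value) pair.
    assignments = {}
    for option in sorted(released_options, key=lambda o: o[1], reverse=True):
        if not agents_weakest_first:
            break
        agent = agents_weakest_first.pop(0)  # weakest remaining agent
        assignments[agent] = option
    return assignments

released = [("route_a", 5.0), ("route_b", 2.0)]
print(redistribute(released, ["agent_low", "agent_mid"]))
# the weakest agent receives the most valuable released option
```

The key design point is that nothing is taken from the high performers by force; redistribution only reallocates options they voluntarily vacated while exploring, which is why individual choice is preserved.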
To test the framework, the researchers used a dataset comprising detailed information on the job histories of 6.5 million professionals across a 20-year timeframe. "In the real-world data, we see a high degree of disparity, without very much redistribution of elite opportunities from relatively advantaged to disadvantaged employees," Yang says.
The algorithm converted the real-world job information into opportunities offered to hypothetical agents. The resulting career paths were analyzed in terms of both performance and fairness. Performance was defined by aggregate rewards earned by all agents across all periods. Fairness was defined by the degree to which initial performance disparities were resolved over successive decisions.
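The two evaluation quantities just described can be written down directly: performance as total reward summed over agents and periods, and fairness as the shrinkage of initial disparities over successive decisions. The gap measure below (best-off minus worst-off agent) is an illustrative choice; the study's exact fairness metric may differ.

```python
# Sketch of the two evaluation quantities. rewards_by_period is a list of
# {agent: reward} dicts, one per decision period.

def performance(rewards_by_period):
    # Aggregate reward earned by all agents across all periods.
    return sum(sum(period.values()) for period in rewards_by_period)

def fairness_gain(rewards_by_period):
    # Disparity = gap between best- and worst-off agent in a period.
    # A positive gain means initial disparities shrank over time.
    def gap(period):
        return max(period.values()) - min(period.values())
    return gap(rewards_by_period[0]) - gap(rewards_by_period[-1])

history = [{"a": 5.0, "b": 1.0}, {"a": 4.0, "b": 2.0}, {"a": 3.5, "b": 3.0}]
print(performance(history))    # 18.5
print(fairness_gain(history))  # 3.5: the gap fell from 4.0 to 0.5
```

The "complementarity" claim is precisely that both numbers can improve at once, rather than trading off against each other.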
The "fairness-by-design" framework outperformed eight alternative ML methods, drawn from three different methodological families, on both fairness and performance.
The researchers also adjusted the system to account for people's changing preferences. Early-career professionals tend to value employer reputation and advancement potential; in late career, rewards pertaining to job stability and security become more salient. Even with these shifting preferences modeled, the framework functioned as intended, improving the average quality of overall career paths while fueling upward mobility.
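One simple way to model the preference shift described above is to score each job as a weighted mix of attributes, with the weights sliding from reputation and advancement toward stability and security as a career progresses. The attribute names, the linear schedule, and the 20-year horizon below are all hypothetical choices for illustration.

```python
# Hypothetical career-stage preference weighting. Attribute names,
# weights, and the linear interpolation schedule are invented.

def preference_weights(career_year, horizon=20):
    t = min(career_year / horizon, 1.0)  # 0.0 early career, 1.0 late career
    return {
        "reputation": 0.5 * (1 - t),
        "advancement": 0.5 * (1 - t),
        "stability": 0.5 * t,
        "security": 0.5 * t,
    }

def job_value(job, career_year):
    # Value of a job offer = preference-weighted sum of its attributes.
    w = preference_weights(career_year)
    return sum(w[k] * job[k] for k in w)

startup = {"reputation": 0.9, "advancement": 0.9,
           "stability": 0.2, "security": 0.2}
print(job_value(startup, career_year=1))   # early career: advancement dominates
print(job_value(startup, career_year=19))  # same job scores lower late-career
```

Under such a scheme, the same opportunity carries a different reward for different agents, which is the kind of heterogeneity the researchers report the framework tolerating.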
In a follow-up study utilizing the New York Yellow Taxi Trip record database, the framework was tasked with generating route recommendations to hypothetical "agents," i.e. cab drivers, with varying performance records. In this domain, the choice-set was much smaller (263 locations, as compared to 4,282 companies), and the timeframe far shorter (two hours as opposed to 20 years). As with the career-planning example, the taxi study found that more equitable distribution of high-quality routes led to higher average income per minute for the system as a whole.
"Because the framework proved adaptable to different domains and agent preferences, we think it could be used in the future as a governance mechanism for a variety of AI contexts," Yang says. Health care scheduling, course registration in higher education, and provision of digital services are a few areas Yang sees as likely candidates.
While emphasizing that her research is still ongoing, she argues that it poses a serious challenge to standard ways of thinking about AI. "Our formal proof establishes the conditions under which fairness and performance reinforce each other, and our experiments show those conditions are achievable in realistic settings. That gives our work both theoretical and experimental grounding," Yang says.