Life isn’t fair … yet
Algorithms drive critical decisions about healthcare, criminal law, education and even our finances, but they are not immune to harmful bias. Now, Berkeley engineers have devised a way to design fairer algorithms, pushing data-driven decision-making closer to the promise of unbiased accuracy. Anil Aswani, professor of industrial engineering and operations research, and Matt Olfat (Ph.D.’20 IEOR) have demonstrated the first theoretically proven approach to fairness that can be applied across numerous groups, characteristics and traits.
Previously, creating less biased algorithms required limiting them to a single-use setting, such as income or age, or accommodating at most two protected attributes, like gender and race. In their new approach, the researchers used an optimization hierarchy — a sequence of optimization problems with an increasing number of constraints — for fair statistical decision problems. As a result, complex decisions often tainted by bias can now be proven fairer.
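To give a flavor of the idea (this is an illustrative sketch, not the researchers' actual formulation), one can imagine a sequence of optimization problems where each level adds a tighter fairness constraint across protected groups. The toy example below picks a decision threshold minimizing error, subject to a progressively smaller allowed gap in selection rates between groups; all data and function names are hypothetical.

```python
# Illustrative sketch of a fairness "hierarchy": solve the same decision
# problem under progressively tighter fairness constraints. This is an
# assumption-laden toy, not the paper's method.

def selection_rate(scores, groups, threshold, group):
    """Fraction of a group's members selected at this threshold."""
    members = [s for s, g in zip(scores, groups) if g == group]
    if not members:
        return 0.0
    return sum(s >= threshold for s in members) / len(members)

def best_threshold(scores, outcomes, groups, gap_limit):
    """Lowest-error threshold among those whose group selection
    rates differ by at most gap_limit (a demographic-parity-style
    constraint). Returns (threshold, error rate)."""
    best = None
    for t in sorted(set(scores)):
        rates = [selection_rate(scores, groups, t, g) for g in set(groups)]
        if max(rates) - min(rates) > gap_limit:
            continue  # violates the fairness constraint at this level
        err = sum((s >= t) != y for s, y in zip(scores, outcomes)) / len(scores)
        if best is None or err < best[1]:
            best = (t, err)
    return best

# Toy data: risk scores, true outcomes, and a binary protected attribute.
scores   = [0.1, 0.4, 0.5, 0.6, 0.8, 0.9, 0.3, 0.7]
outcomes = [0,   0,   1,   1,   1,   1,   0,   1]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

# The hierarchy: re-solve with an increasingly strict fairness gap.
for gap in (1.0, 0.25, 0.2):
    print(gap, best_threshold(scores, outcomes, groups, gap))
```

Tightening the gap can force the solution away from the unconstrained optimum (here, the threshold shifts and the error rate rises once the constraint binds), which is the trade-off the certification machinery in the real work reasons about.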
The team tested this new technique in a case study involving morphine dosing. After revealing that a biased algorithm was discriminating against women and low-income patients based on their insurance, the researchers trained an algorithm to provide a certifiably fair and automated dosing policy for all patients.
Learn more: New research from professor Anil Aswani creates more inclusive data-driven decision-making (Berkeley IEOR)