Adair Morse, Global Virtual Seminar Series on Fintech
Date Aired: May 22, 2020
Soloman P. Lee Chair in Business Ethics, Associate Professor of Finance
Haas School of Business at the University of California at Berkeley
Haas-Bakar Fellow and Fellow
Berkeley Center for Law and Business
Adair Morse's research covers the areas of household finance, impact investing, entrepreneurship, corruption, and asset management.
Despite the potential for machine learning and artificial intelligence to reduce face-to-face bias in decision-making, a growing chorus of scholars and policymakers has recently voiced concerns that, if left unchecked, algorithmic decision-making can also lead to unintentional discrimination against members of historically marginalized groups. These concerns are being expressed through Congressional subpoenas, regulatory investigations, and an increasing number of algorithmic accountability bills pending in both state legislatures and Congress. To date, however, prominent efforts to define policies under which an algorithm can be considered accountable have tended to focus on output-oriented policies and interventions that may either facilitate illegitimate discrimination or involve fairness corrections unlikely to be legally valid.
We provide a workable definition of algorithmic accountability that is rooted in the caselaw addressing statistical discrimination in the context of Title VII of the Civil Rights Act of 1964. Drawing on the burden-shifting framework codified to implement Title VII, we formulate a simple statistical test to apply to the design and review of the inputs used in any algorithmic decision-making process. Application of the test, which we label the input accountability test, constitutes a legally viable, deployable tool that can prevent an algorithmic model from systematically penalizing members of protected groups who are otherwise qualified in a target characteristic of interest.
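The abstract does not spell out the statistical mechanics of the input accountability test, but one illustrative reading of its aim is a check, input by input, of whether the variation in a candidate input that is *not* explained by the legitimate target characteristic still correlates with protected-group status. The sketch below implements that reading under stated assumptions; the function name, the OLS residualization, and the t-statistic form are all illustrative choices, not the authors' specification.

```python
import numpy as np

def input_test_statistic(x, target, protected):
    """Illustrative sketch (not the paper's formula): t-statistic for
    whether the part of candidate input `x` unexplained by the
    legitimate `target` characteristic still correlates with
    `protected` group membership."""
    # Residualize x on the target characteristic via OLS,
    # keeping only the variation in x not explained by the target.
    X = np.column_stack([np.ones_like(target), target])
    beta, *_ = np.linalg.lstsq(X, x, rcond=None)
    resid = x - X @ beta
    # Correlation of the residual with protected-group status.
    r = np.corrcoef(resid, protected)[0, 1]
    n = len(x)
    # t-statistic for H0: no correlation; under this reading,
    # |t| > 1.96 would flag the input as penalizing otherwise-
    # qualified members of the protected group.
    return r * np.sqrt((n - 2) / (1 - r * r))
```

On synthetic data, an input that equals the target plus protected-correlated noise produces a large |t| and would be excluded, while an input driven only by the target plus independent noise does not; the design choice of testing inputs rather than model outputs mirrors the paper's contrast with output-oriented fairness corrections.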