An Urgent Call To Address Racial Injustice in AI

Gina Costanza Johnson
8 min read · Jan 3, 2022


Researchers demand that those who work in AI consider racism, gender, and other structural inequalities explicitly.

FORMS OF AUTOMATION such as artificial intelligence increasingly inform decisions about who gets hired, who gets arrested, and who receives health care. Real-world examples abound, demonstrating that technology can exclude, control, or oppress people and reinforce historical systems of inequality that predate AI.

Because of the severe harm AI can inflict on marginalized people, teams of sociologists and computer scientists around the world argue that the developers of AI models should consider race more explicitly, drawing on concepts from critical race theory and intersectionality.

To expand on those terms: critical race theory is a method of examining the impact of race and power, first developed by legal scholars in the 1970s, that grew into an intellectual movement influencing fields including education, ethnic studies, and sociology. Intersectionality acknowledges that people from different backgrounds experience the world differently based on their race, gender, class, and other forms of identity.

The urgency of intersectionality | Kimberlé Crenshaw

One approach, presented at the American Sociological Association earlier this year, coins the term "algorithmic reparation." In a paper published in Big Data & Society, the authors describe algorithmic reparation as combining intersectionality and reparative practices "with the goal of recognizing and rectifying structural inequality."

Reparative algorithms would prioritize protecting groups that have historically experienced discrimination and directing resources to marginalized communities that often lack the means to fight powerful interests.

"Algorithms are animated by data, data comes from people, people make up society, and society is unequal," the paper reads. "Algorithms thus arc towards existing patterns of power and privilege, marginalization, and disadvantage."

~ From Big Data & Society
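The paper does not spell out an implementation, but one narrow way to read "prioritizing" a historically marginalized group is to weight that group's examples more heavily when a model is trained, so that errors affecting the group cost more. The sketch below is purely illustrative: the toy data, the `group` flag, and `REPAIR_WEIGHT` are my own stand-ins, not anything from the paper.

```python
# Illustrative sketch only: upweighting a historically marginalized group
# during training, one narrow reading of "prioritizing" in algorithmic
# reparation. The data, the group flag, and REPAIR_WEIGHT are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: features X, a binary outcome y, and a binary group-membership flag.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)  # 1 = historically marginalized group

REPAIR_WEIGHT = 2.0  # how strongly to prioritize the group's examples
sample_weight = np.where(group == 1, REPAIR_WEIGHT, 1.0)

# Misclassifying members of the prioritized group now carries more weight.
model = LogisticRegression().fit(X, y, sample_weight=sample_weight)
```

Reweighting is only a sliver of what the authors propose; algorithmic reparation also covers redirecting resources and deciding whether a system should be deployed at all.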

The Humanizing Machine Intelligence Project

Three authors from the Humanizing Machine Intelligence project, housed at Australian National University and Harvard's Berkman Klein Center for Internet & Society, argue that efforts to make machine learning fairer have fallen short because they assume we live in a meritocratic society and prioritize numerical measures of fairness over equity and justice. The authors say reparative algorithms can help determine whether an AI model should be deployed or dismantled. Other recent papers raise similar concerns about how researchers have interpreted algorithmic fairness until now.
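To make "numerical measures of fairness" concrete: one widely used example is demographic parity, the gap in positive-prediction rates between groups. The helper below is a hypothetical illustration of such a metric, not code from any of the papers discussed here; the authors' point is that a small number on a measure like this does not by itself establish equity or justice.

```python
# Hypothetical illustration of a common numerical fairness measure
# (demographic parity difference); not taken from the papers discussed here.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy example: a screening model flags 50% of group 0 but only 25% of group 1.
y_pred = np.array([1, 1, 0, 0] * 25 + [1, 0, 0, 0] * 25)
group = np.array([0] * 100 + [1] * 100)
print(demographic_parity_difference(y_pred, group))  # 0.25
```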

The wider AI research community has taken note. The Fairness, Accountability, and Transparency conference recently said it would host a workshop focused on critiquing and rethinking fairness, accountability, and transparency in machine learning. The University of Michigan will host an algorithmic reparation workshop in September 2022.

Still, researchers acknowledge that making reparative algorithms a reality could be an uphill battle against institutional, legal, and social barriers similar to those faced by critical race theory in education and affirmative action practices in hiring.

Critical Race Theory

Critical race theory has recently become a hotly contested political issue and is often wielded in ways that have little to do with the theory itself. This past fall, for example, Virginia governor-elect Glenn Youngkin made attacking critical race theory part of his campaign strategy, vowing to ban it and to give parents greater control over what their children learn in public school. Youngkin also tapped into parents' residual anger about last year's school closures and some white voters' resistance to teaching students about America's long history of anti-Black racism.

Meanwhile, in Tennessee, an anti-critical-race-theory law led to criticism of books written about the desegregation of US schools. In contrast, California Governor Gavin Newsom signed a law making ethnic studies a high school graduation requirement by 2025. A recent study found that ethnic studies classes improved graduation and school attendance rates in San Francisco. At the same time, the 2020 Census found the US is more racially and ethnically diverse than ever: the share of Americans who identify as white has declined, and the share who identify as white and another racial group has increased.

Supporters of algorithmic reparation suggest taking lessons from curation professionals such as librarians, who have had to consider how to ethically collect data on students and decide what materials should be included on library shelves and in library systems, often to attest to a library's value with respect to student learning outcomes. They propose asking not just whether the performance of an AI model is deemed fair or reasonable, but whether it shifts power.

Timnit Gebru

Timnit Gebru, former Google AI Researcher

These suggestions echo earlier recommendations by former Google AI researcher Timnit Gebru, who in 2019 co-authored a paper encouraging machine learning practitioners to consider how archivists and library scientists have dealt with issues involving ethics, inclusivity, and power. Gebru is also known for research exposing racial and gender bias in facial analysis systems and their training datasets.

Gebru says Google fired her in late 2020. A critical analysis in VentureBeat concluded that Google subjected Gebru to a pattern of abuse historically aimed at Black women in professional environments, often referred to as "misogynoir," a term coined by scholar Moya Bailey. This form of abuse has been used successfully by individuals and institutions to silence, shame, and erase Black women and their contributions for centuries. The tactics catalogued in The Abuse and Misogynoir Playbook are disbelief, dismissal, gaslighting, discrediting, revisionism, and erasure.

Earlier this year, five US senators urged Google to hire an independent auditor to evaluate the impact of racism on Google's products and workplace. Google did not respond to the letter.

In 2019, four Google AI researchers argued the field of responsible and ethical AI needs critical race theory because most work in the field does not account for the socially constructed aspect of race or recognize the influence of history on data sets that are collected.


"We emphasize that data collection and annotation efforts must be grounded in the social and historical contexts of racial classification and racial category formation," the paper reads. "To oversimplify is to do violence, or even more, to reinscribe violence on communities that already experience structural violence."

Alex Hanna, the paper's lead author and one of the first sociologists hired by Google, was a vocal critic of Google executives in the wake of Gebru's departure. Hanna says she appreciates that critical race theory centers race in conversations about what is fair or ethical and can help reveal historical patterns of oppression. She has since co-authored a paper examining how facial recognition technology reinforces constructs of gender and race that date back to colonialism.

Alex Hanna, Senior Research Scientist, Ethical AI at Google

In late 2020, Margaret Mitchell, who co-led the Ethical AI team at Google with Gebru, said the company was beginning to use critical race theory to help decide what is fair or ethical. Mitchell was herself fired in early 2021. A Google spokesperson says critical race theory is part of the review process for AI research.

In yet another paper set to be published next year, Rashida Richardson, an assistant professor of law and political science at Northeastern University, contends that you cannot think of AI in the US without acknowledging the influence of racial segregation. The legacy of laws and social norms to control, exclude, and otherwise oppress Black people is too influential. Richardson is also an adviser to the White House Office of Science and Technology Policy.

Rashida Richardson, Senior Policy Advisor for Data and Democracy at White House Office of Science and Technology Policy

For example, studies have found that algorithms used to screen apartment renters and mortgage applicants disproportionately disadvantage Black people. Richardson says it's essential to remember that federal housing policy explicitly required racial segregation until the passage of civil rights laws in the 1960s. The government also colluded with developers and homeowners to deny opportunities to people of color and keep racial groups segregated. She says segregation enabled "cartel-like behavior" among white people in homeowners associations, school boards, and unions. In turn, segregated housing practices compound problems or privileges related to education or generational wealth.

Historical patterns of segregation have poisoned the data on which many algorithms are built, Richardson says, from classifications of what counts as a "good" school to attitudes about policing Brown and Black neighborhoods.

"Racial segregation has played a central evolutionary role in the reproduction and amplification of racial stratification in data-driven technologies and applications. Racial segregation also constrains conceptualization of algorithmic bias problems and relevant interventions," she wrote. "When the impact of racial segregation is ignored, issues of racial inequality appear as naturally occurring phenomena, rather than byproducts of specific policies, practices, social norms, and behaviors."

As a solution, Richardson believes AI can benefit from adopting principles of transformative justice, such as including victims and affected communities in conversations about how AI models are built and designed, and making the repair of harm part of those processes. Similarly, evaluations of AI audits and algorithmic impact assessments conducted in the past year conclude that legal frameworks for regulating AI typically fail to include the voices of communities impacted by algorithms.

Richardson's writing comes at a time when the White House is considering how to address the ways AI can harm people. Elsewhere in Washington, DC, members of Congress are working on legislation that would require businesses to regularly report summaries of algorithmic impact assessments to the Federal Trade Commission and to create a registry of systems critical to human lives. A recent FTC announcement hints that the agency will establish rules to regulate discriminatory algorithms in 2022.

Some local leaders aren't waiting for Congress or the FTC to act. Earlier this month, the District of Columbia attorney general introduced the Stop Discrimination by Algorithms Act that would require audits and outline rules for algorithms used in employment, housing, or credit.

"…come celebrate

with me that everyday

something has tried to kill me

and has failed."

- Lucille Clifton


Gina Costanza Johnson

Digital Media Change Agent | Digital Philanthropist | Digital Design Ethicist | Humane Technology Advocate