WASHINGTON, D.C. – Attorney General Karl A. Racine today introduced landmark legislation to strengthen civil rights protections for District residents and prohibit companies and institutions from using algorithms that produce biased or discriminatory results and lock individuals, especially members of vulnerable communities, out of critical opportunities, like jobs and housing.
The legislation – the first comprehensive bill of its kind in the country – would hold businesses accountable for preventing bias in their automated decision-making algorithms and require them to report and correct any bias that is detected. The proposal would also increase transparency by requiring companies to inform consumers about what personal information they collect and how that information is used to make decisions. Discriminatory algorithms can affect individuals’ everyday lives, including the schools they can attend, the homes they can purchase, the loans they are approved for, and the jobs they are hired for. To root out discrimination in automated decision-making tools, AG Racine’s legislation would make it illegal for businesses and organizations to use discriminatory algorithms in four key areas of life opportunity: education, employment, housing, and public accommodations and services, including credit, health care, and insurance.
“Not surprisingly, algorithmic decision-making computer programs have been convincingly proven to replicate and, worse, exacerbate racial and other illegal bias in critical services that all residents of the United States require to function in our treasured capitalistic society,” said AG Racine. “That includes obtaining a mortgage, automobile financing, student loans, any application for credit, health care, assessments for admission to educational institutions from elementary school to the highest level of professional education, and other core points of access to opportunities to a better life. This so-called artificial intelligence is the engine of algorithms that are, in fact, far less smart than they are portrayed, and more discriminatory and unfair than big data wants you to know. Our legislation would end the myth of the intrinsic egalitarian nature of AI.”
The passage of the landmark civil rights laws in the 1960s provided a national legal framework for preventing discrimination in key aspects of life – the workplace, housing, voting, financial aid, federal benefits, and much more. Building on these landmark federal laws, in the 1970s, the District passed the Human Rights Act, one of the strongest civil rights laws in the country. And in 2019, the Office of the Attorney General (OAG) established its Civil Rights Section to investigate and challenge discriminatory patterns and policies that harm District residents.
The passage of the original civil rights legislation in the 1960s and 1970s has helped build a more equitable and inclusive society. But the lawmakers who passed those laws could not have predicted modern technologies. Today, discrimination increasingly results from institutions’ use of advanced algorithms to make important decisions.
Algorithms are automated tools that use machine learning and personal data to make predictions and decisions about individuals. Companies often use algorithms to help determine who to hire for a new job, how much interest to charge for a loan, whether to approve a tenant for an apartment, or when a patient should be referred for additional medical care. These systems are designed to recognize patterns and draw conclusions using existing data, and as a result, algorithms can inherit the biases of prior decision-makers. When companies rely on biased algorithms to make important decisions, they can unintentionally replicate existing inequalities and continue historical patterns of discrimination based on race, gender, sexual orientation, disability, and other traits. Problematically, this kind of bias often goes undetected, but it can have sweeping ramifications for individuals’ daily lives.
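To make this mechanism concrete, the following is a minimal, self-contained sketch; it is not part of the bill, and all data in it is synthetic. It shows how a model trained on historically biased hiring decisions can reproduce that bias through a "neutral" proxy feature (here, a stand-in for something like neighborhood), even though the protected attribute itself is never given to the model.

```python
# Minimal illustration of how a model trained on biased historical decisions
# can inherit that bias through a proxy variable, even when the protected
# attribute is excluded from the features. All data is synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000

group = rng.integers(0, 2, size=n)       # protected attribute (0 or 1)
qualification = rng.normal(size=n)       # same distribution in both groups

# A "neutral" proxy feature (e.g., neighborhood) correlated with group.
proxy = group + rng.normal(scale=0.5, size=n)

# Historical decisions were biased: group 1 needed a higher score to be hired.
hired = (qualification > np.where(group == 1, 0.8, 0.0)).astype(int)

# Train WITHOUT the protected attribute -- only qualification and the proxy.
X = np.column_stack([qualification, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Among equally qualified applicants, the model still favors group 0,
# because it learned to use the proxy to reproduce the old disparity.
qualified = qualification > 0.4
for g in (0, 1):
    mask = qualified & (group == g)
    print(f"group {g}: predicted hire rate among qualified = {pred[mask].mean():.2f}")
```

In this toy setup, the model assigns a negative weight to the proxy simply because it correlates with the group that was historically held to a higher bar. This is the kind of hidden pattern the legislation's auditing requirements are meant to surface.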
“Every one of us is evaluated by computers for some of the most important opportunities in our lives, such as whether we are an appropriate candidate for a job, whether we should be offered a loan, or whether we would be a good tenant in an apartment building,” said Professor Laura Moy, Director of the Communications & Technology Law Clinic at Georgetown Law. “Faceless algorithms often scrutinize us in relative secrecy, and sometimes their determinations deny us important opportunities, without anyone telling us why. We like to think of ourselves as a ‘land of opportunity,’ but increasingly, automated processes may deprive us of opportunities—sometimes in a discriminatory manner. Residents of Washington, DC deserve to know whether and how tools like artificial intelligence are used to make extremely consequential decisions about them, to be protected against new forms of discrimination, and to be able to count on companies doing the necessary hard work to ensure their algorithms are fair. We greatly appreciate AG Racine’s leadership on this critical issue and for introducing this needed legislation.”
“I am thrilled to see legislation in the works that specifically addresses algorithmic accountability,” said Dr. Timnit Gebru, Founder of Distributed Artificial Intelligence Research Institute and former Google employee. “At this point it should be clear that multinational corporations will not self regulate. To the contrary, they push out people with the slightest criticism of their proliferation of harmful systems. Without laws requiring companies to assess the potential for discriminatory impact of their algorithms, what they do instead is eject people like me who attempt to do that internally, even though this was literally in my job description.”
See below for more statements from local and national organizations that support AG Racine’s legislation.
As of 2019, Black families in the United States had a median net worth of just $24,100 compared to white families’ median net worth of $188,200, and this gap is only getting wider. The wealth gap was created over the course of centuries of slavery, violent racism, and systemic exclusion. It persists because discrimination continues to cut Black people and other marginalized people off from access to opportunities. Today, that discrimination does not always result from conscious decisions of individuals, but may stem from algorithmic bias. Here are some examples of how algorithmic bias harms members of marginalized communities:
- At least 55% of human resources professionals in the U.S. use predictive algorithms in their hiring processes, from resume scanners to interview analysis to performance predictors. But algorithmic resume scanners prioritize male candidates, are inaccessible to applicants with disabilities, and may de-preference first-generation college graduates.
- Hospitals and insurers also use algorithms to help manage health care for about 200 million people in the United States. A study of one widely used hospital algorithm found that it systematically discriminated against Black patients, failing to refer them for care while referring white patients who were equally sick.
- Tenant-screening companies, a $1 billion industry, use algorithms to generate automated tenant background reports for nine out of 10 landlords in the U.S. These screening algorithms are prone to errors and can incorrectly include criminal or eviction records belonging to people with similar names.
AG Racine’s Stop Discrimination by Algorithms Act would change District law to strengthen civil rights protections and protect marginalized communities from the harm caused by algorithmic bias by:
- Prohibiting companies and organizations from using algorithms that produce biased and unfair results: This proposed legislation would make it illegal for companies and organizations to use discriminatory algorithms to make decisions about key areas of life opportunity, including education, employment, housing, and public accommodations and services like credit, health care, and insurance.
- Requiring companies to audit their algorithms for discriminatory patterns: Companies would be required to perform an audit each year to ensure that their algorithmic processing practices do not discriminate directly and to determine whether the results show a disparate impact on protected groups (one common disparate-impact check is sketched after this list). Companies would also be required to document how their algorithms are built, how the algorithms make determinations, and all of the determinations made. Companies would be required to report audit results and any needed corrective steps to OAG.
- Increasing transparency for consumers: Companies would be required to make easy-to-understand disclosures to all consumers about their use of algorithms, what personal information they collect, and how their algorithms use that information to reach decisions. If a business makes an unfavorable decision about an opportunity, like denying a housing application or charging a higher interest rate on a loan, based on an algorithmic determination, it must provide a more in-depth explanation. Businesses must also give consumers an opportunity to submit corrections to prevent negative decisions based on inaccurate personal information.
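For illustration only, here is one common statistical check an auditor might run when testing for disparate impact: the "four-fifths rule," which compares selection rates across groups and flags a protected group whose rate falls below 80% of the reference group's. The bill does not prescribe this particular test, and the audit log below is entirely hypothetical.

```python
# Sketch of a disparate-impact check using the conventional "four-fifths
# rule." The SDAA does not mandate this specific test; it is shown only to
# illustrate what an annual audit might compute. All inputs are hypothetical.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns rate per group."""
    totals, selected = Counter(), Counter()
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. A ratio below 0.8 is a conventional red flag."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit log of (group, was_approved) outcomes.
log = [("A", True)] * 60 + [("A", False)] * 40 \
    + [("B", True)] * 35 + [("B", False)] * 65

ratio = disparate_impact_ratio(log, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.35 / 0.60 = 0.58 -> flag
```

A ratio like the 0.58 above would not by itself prove illegal discrimination, but it is the kind of result that would trigger the documentation, reporting, and corrective steps the bill describes.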
The legislation would set a civil penalty of up to $10,000 for each individual violation of the law.
A copy of the legislative transmittal letter is available here.
A copy of the bill text is available here.
Protecting Civil Rights
OAG’s Civil Rights Section, established in 2019, investigates and brings lawsuits to challenge discriminatory policies and practices that harm District residents. OAG has filed suit against Daro Management for unlawfully discriminating against low-income renters, and reached settlements with Evolve, LLC and Curtis Investment Group for similar claims, requiring the companies to pay up to $250,000 and $900,000 to the District, respectively. In July 2020, OAG announced a series of lawsuits against 16 real estate companies and professionals engaged in illegal source-of-income discrimination. The office has also worked with Apartments.com and Zillow to fight housing discrimination on their platforms. Additionally, OAG has reached a settlement with a home repair company that illegally refused to do business in certain District neighborhoods. Learn more about the District’s civil rights protections and how OAG is working to enforce them.
If you believe you have been a victim of discrimination, you may report it to OAG’s Civil Rights Section by:
- Submitting a civil rights tip online
- Calling (202) 727-3400
- E-mailing OAGCivilRights@dc.gov
- Mailing OAG, ATTN: Civil Rights Section at 400 6th Street NW, Suite 10100, Washington, D.C. 20001
OAG’s civil rights work complements the work of the Office of Human Rights (OHR), which is the primary District agency that investigates individual discrimination complaints. You can file a complaint with OHR at ohr.dc.gov or call (202) 727-4559.
Below are statements from other local and national organizations that support AG Racine’s legislation:
“Algorithmic models, like those used for credit scoring, advertising, insurance pricing, and many other aspects of our increasingly digital lives, can be useful tools for analysis and the prediction of patterns. But these models can also result in discriminatory practices by keeping individuals from accessing jobs, housing, and other critical life opportunities based on protected categories like race, sex, or age. This is why the Washington Lawyers’ Committee supports The Stop Discrimination by Algorithms Act of 2021 (SDAA), a bill recently introduced by the DC Office of the Attorney General which will ban the discriminatory use of protected traits in algorithms as related to important life opportunities, require companies to be transparent about their use of algorithms and to audit their algorithms for discriminatory patterns, and provide a private right of action for violations of the SDAA. We thank the Office of the Attorney General for introducing this bill, and urge the D.C. Council to fully support it.”
- Mirela Missova, Counsel, Washington Lawyers' Committee
“Systemic racism, including conscious and unconscious biases, can't be allowed to hide behind an algorithm. We applaud Attorney General Racine for taking on this critical, timely issue, and look forward to working with the OAG and the Council to advance these protections for DC residents seeking fairness in the decisions that affect them in the context of getting access to a loan or housing or other critical life-affecting opportunities.”
- Ariel Levinson-Waldman, Founding President and Director-Counsel, Tzedek DC
“The ERC is glad to see AG Racine introduce the Stop Discrimination by Algorithms Act. Algorithms and the people that develop and use them can be biased and discriminatory, even if unintentionally. Housing providers that use algorithms to determine people's access to housing must ensure that their algorithms are free from illegal discrimination. The Equal Rights Center supports the Office of Attorney General's efforts to ensure the fair housing rights of DC residents.”
- Susie McClannahan, Fair Housing Rights Program Manager, Equal Rights Center
“Artificial Intelligence systems and their algorithms are developed in ways that fail to consider existing racism, sexism, and other systemic inequities. The proliferation of AI used by private and government actors in key areas, including housing, employment, healthcare, education, and the criminal legal system, has resulted in invisible but real discrimination against people of color, women, people with disabilities, and other marginalized groups. The ACLU-DC supports the Stop Discrimination by Algorithms Act as an important step forward in ensuring that AI systems comply with civil rights laws and are accountable to the people they impact.”
- Nassim Moshiree, Policy Director, ACLU of the District of Columbia
“Racism is hard wired into all our systems and institutions in ways that hold back Black and brown folks when it comes to everything from education, to employment, to health, and wealth. And while it may seem like algorithms and technologies that automate decision-making might remove racial bias and discrimination, unchecked these systems can exacerbate systemic racism, and do so without our awareness. As an organization committed to dismantling racial and economic inequities in the District, DCFPI is incredibly grateful for the legislation that Attorney General Racine has put forward to take aim at this largely hidden problem. The Stop Discrimination by Algorithms Act bans discrimination in algorithms, puts in place checks to limit bias in critical areas of decision-making, and requires transparency when this type of automation is at play.”
- Erica Williams, Executive Director, DC Fiscal Policy Institute
“This legislation sends a message that in some of the most important areas of everyday life, DC will no longer allow technology to be used as a veil for unlawful discrimination against vulnerable and historically marginalized groups. Automated discrimination adds a new dimension to societal inequity and the law must recognize that if civil rights protections are to keep pace.”
- Cynthia Khoo, Associate, Center on Privacy & Technology at Georgetown Law
“We are at a precipice of regulating technology where we can limit future harms of algorithmic bias, privacy violations and civil rights infractions by implementing comprehensive legislative action. The Stop Discrimination by Algorithms Act is exactly the response from elected officials needed to establish the necessary protections against algorithmic discrimination. By prohibiting algorithmic use for making important life decisions like housing or employment and empowering DC residents to know when algorithms are being used, this bill embodies a future in tech regulation that is rooted in equity and justice. Color Of Change commends Attorney General Karl Racine and will continue to push for legislative regulation that ensures ethical conduct from Big Tech.”
- Rashad Robinson, President, Color Of Change
“We urgently need standards to guide increased scrutiny and accountability around algorithmic discrimination, and protect people from unfair treatment at the hands of algorithms. The proposed act by DC is an important step in this direction.”
- Dr. Cathy O'Neil, CEO, ORCAA and author of Weapons of Math Destruction
“The unfettered use of AI must be stopped. One of the most serious dangers of unregulated AI is the classifying, evaluating, and sorting of data about individuals and groups of people by unaccountable corporations. The Stop Discrimination by Algorithms Act (SDAA) will end this discrimination via algorithm. If enacted, it prohibits discrimination when denying someone important opportunities they may have in their lives, including in areas such as employment, housing, credit, and healthcare. Importantly, it also prohibits discrimination in access to information about these important life opportunities. The SDAA will be a landmark law protecting DC residents against digital discrimination.”
- Katharina Kopp, Deputy Director, Center for Digital Democracy
“Common Cause welcomes the Stop Discrimination by Algorithms Act and the bill’s banning of the use of algorithms to discriminate on the basis of protected characteristics with respect to critical life opportunities. The bill’s strong auditing and disclosure requirements would also help detect and mitigate the harmful impacts of algorithmic discrimination. In order for everyone to participate in our democracy, algorithms and automated decision-making must be embedded in equal opportunity. This legislation presents an important first step in holding companies accountable for deploying algorithms that perpetuate inequalities, disproportionately impacting marginalized communities.”
- Yosef Getachew, Media and Democracy Program Director, Common Cause
“OTI is proud to support this bill, which introduces critical safeguards against algorithmic discrimination, encourages evaluations of consequential algorithmic systems, and promotes user rights in the District of Columbia. We applaud Chairman Mendelson and the DC Office of the Attorney General for taking this important step.”
- Spandana Singh, Policy Analyst, Open Technology Institute
“Policymakers need to address the issue of algorithmic bias, which is currently opaque to the public. The Stop Discrimination by Algorithms Act is an important proposal to mitigate harm done by algorithms. We look forward to working with the sponsors of the bill to seek out the most effective ways to identify algorithmic biases and stop potential harms.”
- Nandita Sampath, Policy Analyst, Consumer Reports
“At many of the most important moments of our lives—when we’re applying for jobs, housing, loans, college—we are screened and scored by opaque algorithms, often without even knowing it. And the algorithms often reflect judgments that reinforce bias and inequities in our society. There is little transparency on how these algorithms work, and there is rarely accountability for discriminatory outcomes. The Stop Discrimination by Algorithms Act (SDAA) will establish crucial transparency measures to safeguard against discriminatory algorithms and require explanations to individuals after adverse decisions made by these tools. Crucially, the SDAA provides for strong enforcement via both the Attorney General's powers and a private right of action, allowing individuals to bring a claim to protect their rights. EPIC is proud to support the SDAA.”
- Caitriona Fitzgerald, Deputy Director, Electronic Privacy Information Center (EPIC)