AG Racine to Testify Today Before DC Council on His Landmark Bill to Modernize Civil Rights Laws & Stop Discrimination in Automated Decision-Making Tools

Legislation Would Continue DC’s Role as a Leader in Passing & Enforcing Civil Rights Laws by Helping Stop Discrimination in Algorithms that Impact Homes Residents Purchase, Loans That Get Approved & Hiring

WASHINGTON, D.C. – Attorney General Karl A. Racine today will participate in a hearing before the Council of the District of Columbia to advocate that the Council pass his landmark legislation that would modernize civil rights laws by prohibiting discrimination through the use of automated decision-making tools, known as algorithms, that impact residents’ everyday lives.

The legislation would hold businesses accountable for preventing biases in their automated decision-making algorithms and require them to report and correct any bias that is detected. The bill would also increase transparency by requiring companies to inform consumers about what personal information they collect and how that information is used to make decisions.

Discrimination and bias in algorithms can impact the homes that residents can purchase, the loans that they are approved for, and the jobs that they are hired for. These effects are particularly consequential for residents from vulnerable communities. Read AG Racine’s Medium post about the link between algorithms and civil rights.

Below is the text of AG Racine’s opening statement, as prepared for delivery:

Statement of Karl A. Racine, Attorney General

Office of the Attorney General for the District of Columbia

As Prepared for Delivery

Before the Committee on Government Operations & Facilities

Councilmember Robert White, Chairperson

Public Hearing

Bill 24-558 – Stop Discrimination by Algorithms Act of 2021

September 22, 2022


Thank you, Chairperson White, Councilmembers, and staff for holding today’s hearing on this pathbreaking digital civil rights legislation, “The Stop Discrimination by Algorithms Act of 2021.”

OAG has expertise in civil rights, consumer protection, and tech accountability

Discrimination and bias can change people’s lives—impacting the schools they can go to, the homes they can purchase, the loans they are approved for, and the jobs they are hired for. Our country has taken critical steps to help prevent discrimination and support equity and fairness in these areas, for example by passing laws like the landmark civil rights laws of the 1960s. Building on these federal laws, in the 1970s, the District passed the Human Rights Act, one of the strongest civil rights laws in the country. It outlaws discrimination based on 21 traits, including race, religion, national origin, sexual orientation, gender identity or expression, and disability.

But one of the unfulfilled promises of these civil rights laws is the prevention of discrimination through tools that could not have been predicted nearly fifty years ago: modern technologies like algorithms that many companies and institutions now use to make important decisions. These algorithms—tools that use machine learning and personal data to make predictions about people—can determine who to hire for a new job, how much interest to charge for a loan, and whether to approve a tenant for an apartment. Without laws in place to clearly address discrimination in these tools, they will continue to result in widespread but nearly invisible bias and discrimination against marginalized communities. That is why our legislation is needed—it will modernize our civil rights laws for the 21st century and ensure that discrimination isn’t allowed in any form.

At the Office of Attorney General (OAG), we are committed to enforcing the law to stop discrimination in the District. In 2019, our office established a robust civil rights enforcement practice to investigate and bring lawsuits to challenge discriminatory policies and practices. Our work has included taking action to stop discrimination in areas ranging from denials of fair housing accommodations to denials of services to residents east of the Anacostia River.

OAG has also led the nation in protecting consumers by scrutinizing new technology practices and reining in Big Tech giants. We have sued Amazon and Google for antitrust violations, and we took Facebook to court for data privacy violations. On top of that, in the last year alone, our Office of Consumer Protection has handled more than 2,500 consumer complaints, returned more than $600,000 to consumers through mediation and more than $5 million through lawsuits, and levied nearly $5 million in penalties against large tech-driven companies like DoorDash, Getaround, and Instacart.

These experiences have equipped us to recognize when we face a new civil rights frontier like the algorithmic discrimination challenge we now confront. Yes, algorithmic systems can expand possibilities for some, but, for many marginalized communities, they unfairly foreclose options for the future. This startling inequity requires us to adapt our laws for the digital age, which is why we are proposing action now, before it’s too late.

Algorithms can perpetuate hidden bias on a massive scale

People often assume that algorithmic decisions are fairer or more accurate because they are driven by data and machine learning. But that isn’t the case. Unfortunately, algorithmic decision-making systems are not always neutral. Instead, they can inherit bias or systemic discrimination that is baked into historical data or that results from a designer’s blind spots and then replicate it at a large scale. When this happens, automated decision algorithms can change lives for the worse and lock people—especially members of marginalized groups—out of important life opportunities.

For instance, housing advertisers on Facebook have targeted housing ads to renters and buyers based on race, religion, sex, and familial status. And tenant-screening companies use algorithms to generate automated tenant scoring reports for nine out of ten landlords in the U.S., with some scoring reports making conclusory “accept” or “deny” recommendations with little information about how those determinations were made. Yet these scoring algorithms can incorrectly sweep in criminal or eviction records tied to people with similar names and are especially error-prone in Latino communities, which share a smaller set of unique surnames.

Lending algorithms have calculated higher interest rates for borrowers who attended Historically Black Colleges and Universities or Hispanic-Serving Institutions. And in the health care space, an algorithm used by many hospitals and insurers has suggested that healthier white patients should receive more services to manage their health conditions than sicker Black patients. Meanwhile, software that schedules doctors’ appointments disproportionately double-books Black patients, forcing them to sit in the waiting room longer and experience more hurried appointments than other patients.

Employment algorithms can filter applicants by how closely their resumés match a business’s current workers. After being trained on one workplace’s data, one such screening tool suggested that applicants who were named Jared and played lacrosse were the best candidates for the job. Several years ago, Amazon found its AI hiring software downgraded resumés that included the word “women” and candidates from all-women’s colleges. Other interview software uses video analysis that screens out applicants with disabilities.

These are just some of the many examples that scholars, advocates, and legal researchers have uncovered, and you have heard about many others today.

A digital civil rights solution is needed

These problems are unlikely to change without government intervention. That’s because, while some corporate actors are starting to take a closer look at their practices, there is currently no uniform requirement that any kind of bias testing be performed. And without uniform requirements, many companies will not do this critical work. In fact, there is an inherent misalignment of incentives when it comes to companies’ scrutinizing their algorithms for bias. Companies that design or use algorithms don’t always know what factors go into their decision-making processes. And right now, they have little reason to find out. Compounding the problem, it is not always clear to consumers when algorithms are in use or when they have been excluded from an opportunity because of some aspect of their identity. And even when consumers suspect bias in an automated process, they likely lack the technological expertise and access to the algorithm to prove what happened and why. Congressional lawmakers have put forward proposals to promote digital transparency, but none has gained traction yet, and the algorithmic space remains largely unregulated.

So, rather than asking individual residents to take on the near-impossible task of identifying and combatting digital discrimination one instance at a time, we have put forward a comprehensive, public civil rights solution to protect District residents. It sets standards that all companies must follow to ensure that their algorithmic systems are not perpetuating bias in the first place, and it recognizes the responsibility of the government to monitor for problems and remedy them when they arise.

The bill we propose today is an effort to create equity in the 21st century by ensuring that institutions have incentives to prevent automated discrimination and promote transparency about their processes. It was developed over the course of several years in consultation with civil rights and technology experts—including at the District’s own Georgetown University Law Center, federal lawmakers and regulators, and representatives from the business sector. Though it offers the country’s most comprehensive digital civil rights package to date, it is built on a foundation of principles common to many model algorithmic governance documents and frameworks under consideration in Congress and other state governments.

First, the legislation clarifies how the District’s civil rights law applies in the digital space by explicitly outlawing discrimination in targeted advertising and automated decision-making in core areas of life: education, employment, housing, and important services like health care and insurance. Second, the legislation would require companies to do work on the front end to ensure their algorithms are fair and to share information about this work with OAG in the form of annual bias audits. And third, the legislation would increase transparency for consumers by requiring companies to disclose when algorithms are in use and to offer a more robust explanation if an unfavorable decision—like denying a mortgage or charging a higher interest rate—is made and to explain how consumers can correct any misuse of data.

Together, these provisions implement commonsense guardrails to prevent some of the most pernicious harms of discrimination on an automated scale to promote a more equitable future for all of us. 

We encourage companies that use these algorithms to support this effort. We met with business sector representatives when drafting this legislation to ensure we incorporated their perspectives. These conversations prompted us to, for instance, reduce duplication of effort by allowing a bias audit submitted to another state or federal government to substitute for the report this legislation requires. We also ensured that the bill applies only to larger entities with at least $15 million in annual revenue or to companies processing a significant amount of data on District residents. This means that most small businesses should not be affected by this law. The standards we propose here should not be prohibitive for organizations that are following the District’s current civil rights laws. In fact, some of the businesses we spoke to are already undertaking algorithmic bias audits, and they welcome the competitive advantage that this early compliance will give them over entities that have not yet prioritized digital fairness.

Institutions that have yet to begin this work now have an opportunity to be part of the solution, rather than fighting to retain the status quo. Sadly, today we’ve heard much of the latter. Many companies fought other civil rights advancements like the Americans with Disabilities Act, and ended up on the wrong side of history. Companies should heed those past mistakes and instead work with us to support this important civil rights bill.


For decades, the District has been a leader in passing and enforcing civil rights laws. We can continue that leadership—both locally and nationally—by enacting this legislation as a model for uniform digital civil rights standards. Considering the number of national businesses that do work here, this legislation will establish a baseline for how companies across the country root out biases in the algorithms they use. And there is no reason that other states should not seek to adopt this same model. In fact, we are proud to have more and more localities, states, and even the White House joining us on this path already. Let’s continue to be the leaders we are.

My team and I would be happy to answer any questions you may have.