Statement of Karl A. Racine, Attorney General, Office of the Attorney General for the District of Columbia
Before the Committee on Government Operations & Facilities
Councilmember Robert White, Chairperson
Public Hearing
Bill 24-558 – Stop Discrimination by Algorithms Act of 2021
Introduction
Thank you, Chairperson White, Councilmembers, and staff for holding today’s hearing on this pathbreaking digital civil rights legislation, “The Stop Discrimination by Algorithms Act of 2021.”
OAG has expertise in civil rights, consumer protection, and tech accountability
Discrimination and bias can change people's lives—impacting the schools they can go to, the homes they can purchase, the loans they get approved, and the jobs they are hired for. Our country has taken critical steps to help prevent discrimination and support equity and fairness in these areas, for example by passing the landmark civil rights laws of the 1960s. Building on these federal laws, in the 1970s, the District passed the Human Rights Act, one of the strongest civil rights laws in the country. It outlaws discrimination based on 21 traits, including race, religion, national origin, sexual orientation, gender identity or expression, and disability.
But one of the unfulfilled promises of these civil rights laws is the prevention of discrimination through tools that could not have been predicted nearly fifty years ago: modern technologies like algorithms that many companies and institutions now use to make important decisions. These algorithms—tools that use machine learning and personal data to make predictions about people—can determine who to hire for a new job, how much interest to charge for a loan, and whether to approve a tenant for an apartment. Without laws in place to clearly address discrimination in these tools, they will continue to result in widespread but nearly invisible bias and discrimination against marginalized communities. That is why our legislation is needed—it will modernize our civil rights laws for the 21st century and ensure that discrimination isn’t allowed in any form.
At the Office of the Attorney General (OAG), we are committed to enforcing the law to stop discrimination in the District. In 2019, our office established a robust civil rights enforcement practice to investigate and bring lawsuits to challenge discriminatory policies and practices. Our work has included taking action to stop discrimination in areas ranging from denials of fair housing accommodations to denials of services to residents east of the Anacostia River.
OAG has also led the nation in protecting consumers by scrutinizing new technology practices and reining in Big Tech giants. We have sued Amazon and Google for antitrust violations, and we took Facebook to court for data privacy violations. On top of that, in the last year alone, our Office of Consumer Protection has handled more than 2,500 consumer complaints, returned more than $600,000 to consumers through mediation and more than $5 million through lawsuits, and levied nearly $5 million in penalties against large tech-driven companies like DoorDash, Getaround, and Instacart.[1]
These experiences have equipped us to recognize when we face a new civil rights frontier like the algorithmic discrimination challenge we now confront. Yes, algorithmic systems can expand possibilities for some, but, for many marginalized communities, they unfairly foreclose options for the future. This startling inequity requires us to adapt our laws for the digital age, which is why we are proposing action now, before it’s too late.
Algorithms can perpetuate hidden bias on a massive scale
People often assume that algorithmic decisions are fairer or more accurate because they are driven by data and machine learning. Unfortunately, that isn't always the case: algorithmic decision-making systems are not inherently neutral. They can inherit bias or systemic discrimination that is baked into historical data or that results from a designer's blind spots, and then replicate it at a large scale. When this happens, automated decision algorithms can change lives for the worse and lock people—especially members of marginalized groups—out of important life opportunities.
For instance, housing advertisers on Facebook have targeted housing ads to renters and buyers based on race, religion, sex, and familial status.[2] And tenant-screening companies use algorithms to generate automated tenant scoring reports for nine out of ten landlords in the U.S.,[3] with some reports making conclusory “accept” or “deny” recommendations with little information about how those determinations were made.[4] Yet these scoring algorithms can incorrectly sweep in criminal or eviction records tied to people with similar names, and they are especially error-prone in Latino communities, which share a smaller set of unique surnames.[5]
Lending algorithms have calculated higher interest rates for borrowers who attended Historically Black Colleges and Universities or Hispanic-Serving Institutions.[6] And in the health care space, an algorithm used by many hospitals and insurers has suggested that healthier white patients should receive more services to manage their health conditions than sicker Black patients.[7] Meanwhile, software that schedules doctors’ appointments disproportionately double-books Black patients, forcing them to sit in the waiting room longer and experience more hurried appointments than other patients.[8]
Employment algorithms can filter applicants by how closely their résumés match a business's current workers. After being trained on one workplace's data, one such screening tool suggested that applicants who were named Jared and played lacrosse were the best candidates for the job.[9] Several years ago, Amazon found its AI hiring software downgraded résumés that included the word “women” and candidates from all-women's colleges.[10] Other interview software uses video analysis that screens out applicants with disabilities.[11]
These are just some of the many examples that scholars, advocates, and legal researchers have uncovered, and you have heard about many others today.
A digital civil rights solution is needed
These problems are unlikely to change without government intervention. That's because, while some corporate actors are starting to take a closer look at their practices, there is currently no uniform requirement that any kind of bias testing be performed. And without uniform requirements, many companies will not do this critical work. In fact, there is an inherent misalignment of incentives when it comes to companies' scrutinizing their algorithms for bias. Companies that design or use algorithms don't always know what factors go into their decision-making processes. And right now, they have little reason to find out. Compounding the problem, it is not always clear to consumers when algorithms are in use or when they have been excluded from an opportunity because of some aspect of their identity. And even when consumers suspect bias in an automated process, they likely lack the technological expertise and access to the algorithm to prove what happened and why. Congressional lawmakers have put forward proposals to promote digital transparency, but none has gained traction yet, and the algorithmic space remains largely unregulated.
So, rather than asking individual residents to take on the near-impossible task of identifying and combatting digital discrimination one instance at a time, we have put forward a comprehensive, public civil rights solution to protect District residents. It sets standards that all companies must follow to ensure that their algorithmic systems are not perpetuating bias in the first place, and it recognizes the responsibility of the government to monitor for problems and remedy them when they arise.
The bill we propose today is an effort to create equity in the 21st century by ensuring that institutions have incentives to prevent automated discrimination and promote transparency about their processes. It was developed over the course of several years in consultation with civil rights and technology experts, including scholars at the District's own Georgetown University Law Center, as well as federal lawmakers, regulators, and representatives from the business sector. Though it offers the country's most comprehensive digital civil rights package to date, it is built on a foundation of principles common to many model algorithmic governance documents and frameworks under consideration in Congress and other state governments.[12]
First, the legislation clarifies how the District's civil rights law applies in the digital space by explicitly outlawing discrimination in targeted advertising and automated decision-making in core areas of life: education, employment, housing, and important services like health care and insurance.[13] Second, the legislation would require companies to do work on the front end to ensure their algorithms are fair and to share information about this work with OAG in the form of annual bias audits. And third, the legislation would increase transparency for consumers by requiring companies to disclose when algorithms are in use, to offer a more robust explanation when an unfavorable decision—like denying a mortgage or charging a higher interest rate—is made, and to explain how consumers can correct any misuse of their data.
Together, these provisions implement commonsense guardrails that prevent some of the most pernicious harms of discrimination at an automated scale and promote a more equitable future for all of us.
We encourage companies that use these algorithms to support this effort. We met with business sector representatives when drafting this legislation to ensure we incorporated their perspectives. These conversations prompted us to, for instance, reduce duplication of effort by allowing a bias audit submitted to another state or federal government to substitute for the report this legislation requires. We also ensured that the bill applies only to larger entities with at least $15 million in annual revenue or to companies processing a significant amount of data on District residents. This means that most small businesses should not be affected by this law.[14] The standards we propose here should not be prohibitive for organizations that are following the District's current civil rights laws. In fact, some of the businesses we spoke to are already undertaking algorithmic bias audits, and they welcome the competitive advantage that this early compliance will give them over entities that have not yet prioritized digital fairness.
Institutions that have yet to begin this work now have an opportunity to be part of the solution, rather than fighting to retain the status quo. Sadly, today we've heard much of the latter. Many companies fought other civil rights advancements, like the Americans with Disabilities Act,[15] and ended up on the wrong side of history. Companies should learn from those past mistakes and instead work with us to support this important civil rights bill.
Conclusion
For decades, the District has been a leader in passing and enforcing civil rights laws. We can continue that leadership—both locally and nationally—by enacting this legislation as a model for uniform digital civil rights standards. Considering the number of national businesses that do work here, this legislation will establish a baseline for how companies across the country root out biases in the algorithms they use. And there is no reason that other states should not adopt this same model. In fact, we are proud that more and more localities, states, and even the White House are joining us on this path.[16] Let's continue to be the leaders we are.
My team and I would be happy to answer any questions you may have.
--
[1] “AG Racine Releases New Report During Consumer Protection Week Showing That Consumer Complaints to OAG Reached the Highest Level Ever in 2021” (Mar. 7, 2022), https://oag.dc.gov/release/ag-racine-releases-new-report-during-consumer; “AG Racine Reaches $2.5 Million Agreement with DoorDash for Misrepresenting that Consumer Tips Would Go to Food Delivery Drivers” (Nov. 24, 2020), https://oag.dc.gov/release/ag-racine-reaches-25-million-agreement-doordash; “AG Racine Announces Car Sharing Company, Getaround, to Pay Nearly $1 Million in Unpaid Taxes & for Misrepresenting Benefits and Features of Platform” (July 23, 2021), https://oag.dc.gov/release/ag-racine-announces-car-sharing-company-getaround; “AG Racine Announces Instacart Must Pay $2.54 Million for Misrepresenting that Consumer Tips Would Go to Workers & Failing to Pay Sales Taxes” (Aug. 19, 2022), https://oag.dc.gov/release/ag-racine-announces-instacart-must-pay-254-million.
[2] Naomi Nix & Elizabeth Dwoskin, “Justice Department and Meta settle landmark housing discrimination case,” Wash. Post (June 21, 2022), https://www.washingtonpost.com/technology/2022/06/21/facebook-doj-discriminatory-housing-ads/; Marrian Zhou, “Facebook takes heat from HUD over allegedly discriminatory housing ads,” CNET (Aug. 17, 2018), https://www.cnet.com/news/facebook-takes-heat-from-hud-over-allegedly-discriminatory-housing-ads/.
[3] Lauren Kirchner & Matthew Goldstein, “How Automated Background Checks Freeze Out Renters,” N.Y. Times (May 28, 2020), https://www.nytimes.com/2020/05/28/business/renters-background-checks.html.
[4] See, e.g., Complaint at 13, District of Columbia v. Daro Realty, LLC (D.C. Super. Ct. Feb. 7, 2020), https://oag.dc.gov/sites/default/files/2020-02/DC-v-Daro-Infinity-Barry-Complaint.pdf.
[5] See, e.g., Lauren Kirchner & Matthew Goldstein, “How Automated Background Checks Freeze Out Renters,” N.Y. Times (May 28, 2020), https://www.nytimes.com/2020/05/28/business/renters-background-checks.html.
[6] Student Borrower Protection Center, Educational Redlining (2020), https://protectborrowers.org/wp-content/uploads/2020/02/Education-Redlining-Report.pdf.
[7] Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Science 6464, 447-453 (2019), https://science.sciencemag.org/content/sci/366/6464/447.full.pdf.
[8] Mark Travers, “Medical Scheduling Software Makes Black Patients Wait Longer In Waiting Rooms Than White Patients,” Forbes (Dec. 3, 2019), https://www.forbes.com/sites/traversmark/2019/12/03/medical-scheduling-software-makes-black-patients-wait-longer-in-waiting-rooms-than-white-patients/?sh=7e3c63d8559e.
[9] Rebecca Heilweil, “Artificial intelligence will help determine if you get your next job,” Vox (Dec. 12, 2019), https://www.vox.com/recode/2019/12/12/20993665/artificial-intelligence-ai-job-screen.
[10] Nicol Turner Lee & Samantha Lai, “Why New York City is cracking down on AI in hiring,” Brookings (Dec. 20, 2021), https://www.brookings.edu/blog/techtank/2021/12/20/why-new-york-city-is-cracking-down-on-ai-in-hiring/.
[11] Drew Harwell, “A face-scanning algorithm increasingly decides whether you deserve the job,” Wash. Post (Nov. 9, 2019), https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/.
[12] See, e.g., U.S. Ass’n for Computing Machinery, “Statement on Algorithmic Transparency and Accountability” (May 25, 2017), https://www.acm.org/binaries/content/assets/public-policy/2017_joint_statement_algorithms.pdf; see also Sigal Samuel, “10 things we should all demand from Big Tech right now,” Vox (May 29, 2019), https://www.vox.com/the-highlight/2019/5/22/18273284/ai-algorithmic-bill-of-rights-accountability-transparency-consent-bias (emphasizing the need for transparency, accountability, redress, and independent oversight); Algorithmic Accountability Act of 2022, S. 3572, 117th Cong. (2022); Algorithmic Justice and Online Platform Transparency Act, H.R. 3611, 117th Cong. (2021); Colo. Rev. Stat. § 10-3-1104.9 (2021); Council of City of N.Y. Intro No. 1984-2020-A, proposing amendment to Administrative Code § 1-5 [25] (Dec. 11, 2021); Artificial Intelligence Profiling Act, H. 2644, 66th Leg. (Wash. 2020).
[13] The European Union, another leader in combatting algorithmic discrimination, has created a name for technologies that influence these life opportunities: It calls them “high-risk algorithms” in recognition of how critical automated discrimination in these areas can be. Khari Johnson, “The Fight to Define When AI Is ‘High Risk’,” Wired (Sept. 1, 2021), https://www.wired.com/story/fight-to-define-when-ai-is-high-risk; Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21, 2021).
[14] The legislation exempts most entities with annual revenues less than $15 million, which is aligned with the revenue threshold for local Small Business Enterprises. The Department of Small and Local Business Development categorizes Small Business Enterprises as those that earn between $5 million and $300 million depending on the industry, with the median threshold centering at $19 million. District of Columbia, Department of Small and Local Business Development, CBE Certification - Frequently Asked Questions (FAQs), https://dslbd.dc.gov/page/cbe-certification-frequently-asked-questions-faqs (last visited Sept. 18, 2022).
[15] Edward Berkowitz, “George Bush and the Americans with Disabilities Act,” Social Welfare History Project (2017), https://socialwelfare.library.vcu.edu/issues/disability/george-bush-and-the-americans-with-disabilities-act/ (last visited Sept. 18, 2022); Reid Davenport, Powerful Interests Oppose Strengthening of Disabilities Law (Mar. 6, 2013), https://www.opensecrets.org/news/2013/03/powerful-interests-oppose-strengthening-of-disabilities-law/.
[16] Sharon Goldman, “AI regulation: A state-by-state roundup of AI bills,” VentureBeat (Aug. 8, 2022), https://venturebeat.com/ai/ai-regulation-a-state-by-state-roundup-of-ai-bills/; The White House, “Readout of White House Listening Session on Tech Platform Accountability” (Sept. 8, 2022), https://www.whitehouse.gov/briefing-room/statements-releases/2022/09/08/readout-of-white-house-listening-session-on-tech-platform-accountability/.