Rep. Clarke, Sen. Wyden Lead Letter to Prevent AI Bias in COVID-19 Response

FOR IMMEDIATE RELEASE 

May 12, 2020

Washington, DC – Last night Congresswoman Yvette D. Clarke (NY-09) and Oregon Senator Ron Wyden led a letter to House and Senate Leadership urging that the next stimulus package include protections against federal funding of biased algorithms. 

The letter states: “Amid this lethal pandemic, our failure to enact safeguards against algorithmic bias in sensitive AI systems – such as those used to produce health care assessments and making lending determinations – is literally a matter of life and death.”  

The coronavirus crisis is accelerating the deployment of artificial intelligence (AI) across our society. In the coming months, it is inevitable that AI will play a key role in monitoring the spread of COVID-19 among individuals, predicting future outbreaks, and even allocating scarce health care resources. But while AI offers real benefits, without sufficient testing, embedded bias in these systems could cause great harm and perpetuate inequity. Although AI reaches conclusions through algorithms, its outputs can unintentionally reflect the biases of the programmers who build the systems or of the data sets used to train them.

The letter urges Leadership to include language in forthcoming stimulus legislation requiring:

  • Any health care provider receiving funding in the package to deploy AI systems in medical decision-making contexts only after providing written assurances that bias tests have been performed; and
  • Any business with annual gross receipts of $50,000,000 or greater in 2019 that receives funding in the package to provide a statement that bias tests have been performed on any algorithms it uses to automate or partially automate activities (such as employment and lending determinations) that have historically been affected by discriminatory practices.
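The letter does not prescribe a specific testing methodology. As one illustration of what a basic bias test can involve, the Python sketch below compares a model's favorable-outcome rates across demographic groups and computes a disparate-impact ratio (the "four-fifths rule" long used in employment-discrimination analysis). The data, group labels, and the 0.8 threshold are illustrative assumptions, not requirements drawn from the letter.

    # Hypothetical sketch of a basic bias test: compare a model's rate of
    # favorable outcomes (1 = approved/hired) across demographic groups and
    # flag large gaps. Data, group labels, and the 0.8 threshold (the
    # "four-fifths rule") are illustrative assumptions.

    def selection_rates(outcomes, groups):
        """Return the fraction of favorable outcomes per group."""
        counts = {}
        for outcome, group in zip(outcomes, groups):
            total, favorable = counts.get(group, (0, 0))
            counts[group] = (total + 1, favorable + outcome)
        return {g: favorable / total for g, (total, favorable) in counts.items()}

    def disparate_impact_ratio(rates):
        """Lowest group selection rate divided by the highest."""
        return min(rates.values()) / max(rates.values())

    # Toy data: model decisions and each applicant's demographic group.
    outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    rates = selection_rates(outcomes, groups)
    ratio = disparate_impact_ratio(rates)
    print("Selection rates:", rates)                # A: 0.60, B: 0.40
    print("Disparate-impact ratio: %.2f" % ratio)   # 0.67 -- below 0.8, flag for review

A real audit of a production system would go further, examining error rates, calibration, and outcomes across many protected classes, but even a simple check like this can surface the kind of disparity the letter asks recipients to test for.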

Even before COVID-19, there were examples of biased AI resulting in patients of color being offered less care than white patients. Two other frequently cited examples of AI systems perpetuating bias are employment screening applications which discriminate against elderly individuals and women, and loan origination systems which offer less favorable terms to people of color or fail to include them in loan opportunities altogether. 

Clarke said: “We are seeing the devastating and disproportionate impact COVID-19 has on communities of color. During such a critical time, we must ensure that the use of artificial intelligence in combating COVID-19 is not biased in providing resources to these vulnerable communities who most need it. To ensure protections for our Black and Brown brothers and sisters, Senator Wyden and I led a letter to House and Senate leadership urging that any federal dollars used for AI during coronavirus are vetted to protect against any algorithm bias.”  

Wyden said: “Artificial Intelligence should never be used in situations that affect everyday Americans unless it has been fully vetted to make sure it treats everyone fairly. That goes double when it comes to deciding how scarce resources related to the COVID-19 pandemic are allocated. I’m glad to be working with Rep. Clarke to tell Congressional leaders that COVID relief funds and medical resources shouldn’t depend on unvetted AI.”

The letter was also signed by Senator Edward J. Markey (D-MA) and Representatives Don Beyer (VA-08), Tony Cardenas (CA-29), Andre Carson (IN-07), Jesus G. “Chuy” Garcia (IL-04), Sheila Jackson Lee (TX-18), Pramila Jayapal (WA-07), Henry C. “Hank” Johnson (GA-04), Ted W. Lieu (CA-33), Seth Moulton (MA-06), and Mark Takano (CA-41).

The full text of the letter is below:

May 11, 2020

Dear Speaker Pelosi, Leader McCarthy, Leader McConnell, and Leader Schumer:

The coronavirus crisis is accelerating the deployment of artificial intelligence (AI) across our society. There may be beneficial uses of AI in the context of combatting COVID-19, but without sufficient testing, embedded bias in these systems could cause great harm and perpetuate inequity. Although AI reaches conclusions through algorithms, its outputs can unintentionally reflect the biases of the programmers who build the systems or of the data sets used to train them. Accordingly, it is imperative that any sensitive AI programs deployed or developed with federal dollars during this pandemic be vetted to guard against algorithmic bias.

In the coming months, it is inevitable that AI will play a key role in monitoring the spread of COVID-19 among individuals, predicting future outbreaks, and perhaps even allocating scarce health care resources. Already, algorithms are being used to identify high-risk patients. If the pandemic intensifies and hospitals experience shortages of ventilators or other key supplies, it is conceivable that such risk indexes could be used to prioritize care. Even before COVID-19, there were examples of biased AI resulting in patients of color being offered less care than white patients. Accordingly, amid this lethal pandemic, our failure to enact safeguards against algorithmic bias in sensitive AI systems – such as those used to produce health care assessments – is literally a matter of life and death.  

The economic consequences of COVID-19 also heighten the urgency of confronting algorithmic bias in other contexts. Two of the most frequently cited examples of AI systems perpetuating bias are employment screening applications which discriminate against elderly individuals and women, and loan origination systems which offer less favorable terms to people of color or fail to include them in loan opportunities altogether. While our country potentially faces the highest unemployment rate since the Great Depression and small businesses desperately seek capital, preventing automated discrimination in employment and lending is critical, and it falls upon us to ensure essential safeguards are in place.

We are not alone in these concerns. Leading advocacy organizations such as EqualAI have called for Congress to “mandate that recipients of stimulus funding that utilize AI for essential services and determinations provide confirmation that they’ve checked for bias against protected classes (gender, race, socio-economic class, etc.).” Meanwhile, a recent white paper from the National Security Commission on Artificial Intelligence recommends “ensur[ing] that federally funded computing tools created and fielded to mitigate the COVID-19 pandemic are developed with a sensitivity to and account for potential bias and, at a minimum, do not introduce additional unfairness into healthcare delivery and outcomes.”

Fortunately, these challenges are surmountable without creating inordinate work or obstacles. AI developers can begin to address unintentional bias in these systems by, at a minimum, vetting their code and testing their systems to detect and reduce bias. Accordingly, we urge you to include language in forthcoming stimulus legislation requiring:

  • Any health care provider receiving funding in the package to deploy AI systems in medical decision-making contexts only after providing written assurances that bias tests have been performed; and
  • Any business with annual gross receipts of $50,000,000 or greater in 2019 that receives funding in the package to provide a statement that bias tests have been performed on any algorithms it uses to automate or partially automate activities (such as employment and lending determinations) that have historically been affected by discriminatory practices.

In the context of COVID-19, AI can be a force for good. However, without meaningful oversight, AI-facilitated algorithmic bias could also exacerbate the demographic and socioeconomic inequities of this pandemic. Only Congress can ensure the possibilities of AI are not overshadowed by its perils. Thank you for your consideration of this request.

###

Yvette D. Clarke has been in Congress since 2007. She represents New York’s Ninth Congressional District, which includes Central and South Brooklyn. Clarke is Vice Chair of the Energy and Commerce Committee and is a member of the Homeland Security Committee.