
FTC issues guidelines to minimize opportunities for bias in AI use
Recognizing the potential consumer risks of artificial intelligence (“AI”) technology permeating many businesses today, the Federal Trade Commission (“FTC”) issued guidelines last April for companies using automated decision-making algorithms in their platforms.

In crafting its guidelines, the FTC, which is tasked with protecting consumers and competition against unfair, deceptive, and anti-competitive business practices, drew from its long history of managing issues that arise from companies that use decision-making tools, especially in the field of credit underwriting.

Beyond the field of credit, the FTC flags the potentially discriminatory outcomes of automated models in the field of “Health AI.” The FTC cited a research study published in Science which found that an algorithmic tool used in many hospitals gave higher risk scores to white patients than to Black patients who were equally sick. As AI-powered healthcare technologies for patient monitoring, infection prevention, and COVID-19 vaccine development and distribution continue to mature, it will be crucial to see how companies apply these FTC guidelines.

The use of AI tools for recruiting employees is another critical area to which the FTC guidelines would apply. The quintessential example is Amazon’s experimental AI hiring tool, which the company scrapped after discovering it was biased against women.

The five principles laid down by the FTC appear to champion consumer protection against deception, unfairness, and bias:

  1. Transparency in use of AI and collection of data: Companies should tell consumers how they are using AI and should avoid deceptive practices that may subject them to FTC enforcement action. Companies should also be transparent about how information is being collected. In particular, entities that collect information about eligibility for certain benefits, like credit, employment, insurance, or housing, may trigger duties under the Fair Credit Reporting Act (“FCRA”), including an “adverse action notice.” This notice puts the consumer on alert to review and correct provided information.

  1. Explainability in the decision-making process: If companies use AI to make decisions that deny consumers something of value or assign them risk scores, the company must be able to explain what data was used, what factors and rankings drove the risk score, and how the information was used to reach the result. If AI tools are used to change the terms of a deal, such as reducing a credit limit based on consumer behavior, then the consumer must also be informed of these policies.

  1. Fairness of decisions: Companies should ensure that decisions do not discriminate against a protected class. In evaluating AI-powered platforms, the FTC considers both the input—what factors and proxies were considered in the model—as well as the outcomes. Thus, even if an AI tool appears to be neutral in data collection, if the results skew against a protected class, then the FTC would look into the reasons for the skew and for alternative solutions.

  1. Empirically sound data and models: The FTC reminds aggregators of information that they may be considered “consumer reporting agencies” or “furnishers” and, as such, are subject to the FCRA. Thus, in keeping with the principle of transparency, if a company collects information used to determine consumers’ eligibility for benefits, it has obligations under the FCRA to maintain accurate information, and failure to do so may subject it to fines.

  1. Accountability: Finally, companies must be proactive in crafting security and accountability mechanisms against unauthorized use and bias. For example, companies that use voice recognition technology as an identification tool must create robust security measures to prevent unauthorized use of the platform. In the case of bias, the FTC encouraged firms to ask key questions: How representative is your data set? Does your data model account for biases? How accurate are your predictions based on big data? Does your reliance on big data raise ethical or fairness concerns? The FTC also suggested tapping third-party or independent observers to audit their AI-powered tools.
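To make the fairness and accountability principles above concrete, the short sketch below shows the kind of outcome audit they contemplate: computing per-group approval rates from a model's decisions and flagging a skew. The data, group labels, and helper names are hypothetical and for illustration only; they are not drawn from the FTC guidance itself.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate for each group.

    decisions: iterable of (group, approved) pairs, where
    approved is a bool. All data here is hypothetical.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest.

    A ratio well below 1.0 signals that outcomes skew against
    some group and warrant a closer look, even if the model's
    inputs appear neutral.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group label, model approved?)
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50

rates = selection_rates(decisions)
print(rates)                # {'A': 0.8, 'B': 0.5}
print(impact_ratio(rates))  # 0.625
```

An audit like this only measures the outcome skew the FTC says it would look into; explaining the skew and finding alternative, less discriminatory models would remain the company's burden.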

Beyond credit, health, and employment, it will be interesting to see how the FTC and other government agencies craft further regulations to address concerns about AI products such as insurance claims management, facial recognition systems for immigration, and automated pricing tools in online marketplaces.