On December 19, 2023, the FTC announced an enforcement action against Rite Aid over the retailer's use of facial recognition technology (FRT) in its stores.
This action marks an important moment in AI regulatory history, as it is the first time the FTC has brought an enforcement action targeting the allegedly unfair and discriminatory use of facial recognition technology.
This enforcement decision also serves as a warning and a guidepost for companies using and developing AI systems, especially if those AI systems use facial recognition technology to identify individuals and make automated decisions about them that could result in potential harm.
In this post, we summarize the FTC's complaint and the stipulated order, and then highlight key takeaways for companies deploying AI systems.
The Complaint
The complaint details how Rite Aid deployed FRT in hundreds of its retail stores from roughly 2012 to 2020 to identify customers it had previously flagged as likely to engage in shoplifting or other problematic behavior.
The FTC's complaint alleges two counts against Rite Aid, summarized below.
1. Unfair facial recognition technology practices
The FTC alleges that Rite Aid failed to take reasonable measures to prevent harm to consumers in its use of FRT. Among other things, Rite Aid failed to:
- Assess, monitor, or mitigate any risks of incorrectly identifying consumers, with such errors occurring more frequently based on race or gender. As a result, Black, Asian, Latinx, and women consumers were at higher risk of being misidentified and generating a false positive by Rite Aid's FRT.
- Reasonably assess or even inquire about the accuracy of the FRT before deploying the technology. In fact, both of its vendors' contracts expressly disclaimed any warranty as to the accuracy or reliability of the results.
- Enforce image quality controls, increasing the likelihood of false-positive match alerts. The FTC alleges that Rite Aid's image quality policies demonstrated that it understood how poor image quality could lead to false alerts, and yet Rite Aid did not put any controls or oversight in place to ensure that enrollment photos complied with the policy.
- Appropriately train or oversee store employees responsible for operating the FRT, including how to interpret and act on the match alerts. The available training materials also did not adequately address the possibility of false-positive matches.
- Regularly monitor or test the accuracy of the technology after deployment. According to the complaint, Rite Aid did not adequately examine the accuracy of match alerts, document outcomes, monitor the frequency of false-positive matches, or address issues with problematic enrollments.
- Conduct periodic security assessments of its service providers to ensure that they continued to meet the company's security standards.
- Contractually require its service providers to establish appropriate safeguards for the personal information shared by Rite Aid.
2. Unfair failure to implement or maintain a comprehensive information security program mandated by a previous FTC Order in 2010
The FTC alleges that Rite Aid failed to implement the comprehensive information security program required by the 2010 order. Among other things, Rite Aid failed to:
- Use reasonable steps to assess and select service providers who could meet the security standards for the personal information shared by Rite Aid.
The Stipulated Order
The order contains 16 stipulations that Rite Aid has agreed to comply with. Among the key provisions, Rite Aid:
1. Cannot use any Facial Recognition or Analysis System (defined broadly as "an automated biometric security or surveillance system that analyzes or uses depictions or images, descriptions, recordings, copies, measurements, or geometry of or related to an individual's face to generate an output") for 5 years.
2. Must delete covered biometric information and destroy any AI models or algorithms derived from that information (also called model disgorgement).
3. Must establish and implement an Automated Biometric Security or Surveillance System Monitoring Program if Rite Aid deploys any such system in the future.
4. Must establish and implement procedures to provide consumers with notice and a means for submitting complaints related to the outputs of the AI system if Rite Aid deploys any such system.
5. Must have retention limits for its biometric data prior to implementing any Automated Biometric Security or Surveillance System.
6. Must post clear and conspicuous notices at the retail locations and online platforms disclosing the company's use of any Automated Biometric Security or Surveillance System that uses biometric information collected from consumers. The notices must contain information about:
a. The specific types of biometric information collected,
b. The types of outputs generated by the AI,
c. All purposes for which Rite Aid will use the biometric information, and
d. The timeframe for deletion of each type of biometric information used, as established in the (also mandated) data retention policies.
7. Cannot misrepresent its compliance with these orders.
8. Must establish and maintain a comprehensive information security program for vendors that protects the personal information shared by Rite Aid.
9 & 10. Periodically undergo a security assessment by an independent third party and report the results to the FTC.
Key Takeaways
In an accompanying statement to the enforcement action, Commissioner Alvaro Bedoya underscored the significance of the action for any company deploying biometric surveillance systems and pointed to the order's safeguards as a baseline for responsible use of such technology.
Companies looking to ensure compliance and avoid attention from regulators for consumer-facing AI systems, especially those engaging with high-risk applications of AI that use personal and/or biometric information to make automated decisions about an individual, should look to this order for guidance. It highlights some of the measures the FTC expects companies to take to mitigate the risks such systems pose to consumers.
Consider the what, how, and where (that is, the context) when deploying AI. As the complaint illustrates, the same technology can pose very different risks depending on the decisions it informs, the people it affects, and the settings in which it operates.
Conducting regular risk assessments is important for high-risk AI that produces outputs that could potentially harm consumers. Provision 3 in the stipulated order can serve as a compliance checklist for companies looking to implement "automated biometric security or surveillance systems" or similar AI technology that makes automated decisions about consumers. In particular, the provision requires assessing and addressing the risk of inaccurate outputs, including errors that occur more frequently based on race or gender.
Ensure that your company has a strong training program for employees operating high-risk AI systems. Employees tasked with operating and/or acting on outputs from AI, like Rite Aid's store employees, should be trained on:
- how to evaluate the quality of the images enrolled into the model,
- how to visually compare the images (as the human operator) to see if their assessment agrees with the AI's outcome, and
- how to understand the effects of bias that might be inherent in the data and model.
Companies should be transparent about any use of AI that involves consumer personal information, especially biometric data. Provision 4 in the order explains what details should be in a consumer notice and how Rite Aid must receive and respond to consumer complaints about the system's outputs.
Companies need to scrutinize contracts with vendors handling biometric data or other personal information. The complaint faulted Rite Aid for deploying FRT under vendor contracts that expressly disclaimed any warranty as to the accuracy or reliability of the results, and the order now requires Rite Aid to contractually obligate its service providers to safeguard the personal information it shares with them.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
WilmerHale
Washington, DC 20006
Tel: 617 526 6000
Fax: 617 526 5000
E-mail: laura.bulcher@wilmerhale.com
URL: www.wilmerhale.com

© Mondaq Ltd, 2024 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com