Rite Aid has been barred from using facial recognition software for five years, after the Federal Trade Commission (FTC) determined that the company’s “reckless use of facial surveillance systems” embarrassed consumers and placed their “sensitive information at risk.”
The FTC’s order, which is subject to approval by the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also directs Rite Aid to delete any images collected as part of its facial recognition rollout, along with any products built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.
According to a 2020 Reuters investigation, the pharmacy giant surreptitiously installed facial recognition systems in 200 U.S. stores over an eight-year period beginning in 2012, with “largely lower-income, non-white neighborhoods” serving as the technology’s testbed.
With the FTC paying increased attention to the misuse of biometric surveillance, Rite Aid found itself squarely in the agency’s crosshairs. Among the allegations is that Rite Aid, working with two contracted vendors, built a “watchlist database” of images of customers the company claimed had engaged in criminal activity at one of its stores. These images, which were often low quality, were captured by CCTV cameras or employees’ mobile phones.
When a customer who allegedly matched an existing image in the database entered a store, employees received an automatic alert instructing them to take action. Most of the time, that action was “approach and identify,” meaning staff would verify the customer’s identity and ask them to leave. Many of these “matches” were false positives that led staff to wrongly accuse customers of misconduct, causing “embarrassment, harassment, and other harm,” according to the FTC.
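To make that failure mode concrete, here is a minimal, hypothetical sketch of threshold-based watchlist matching. It assumes a generic face-embedding model compared by cosine similarity; the function names, the 128-dimensional vectors, and the 0.80 cutoff are illustrative assumptions, not details from the FTC complaint or Rite Aid’s actual system. The point is that with low-quality enrollment photos or a permissive threshold, a shopper who merely resembles a watchlist entry can clear the bar and trigger an alert.

```python
# Hypothetical sketch of watchlist matching. Names and numbers are
# invented for illustration, not taken from the FTC complaint.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_watchlist(probe, watchlist, threshold=0.80):
    """Return the watchlist entry best matching `probe` above
    `threshold`, or None. A loose threshold, or noisy enrollment
    images (e.g. blurry CCTV stills), makes false positives likelier."""
    best_id, best_score = None, threshold
    for person_id, enrolled in watchlist.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id  # a store alert fires whenever this is not None

rng = np.random.default_rng(0)
watchlist = {"entry-042": rng.normal(size=128)}  # one enrolled image

# A different person whose embedding merely resembles the enrolled one
# (a lookalike, or distortion from a low-quality probe photo):
lookalike = watchlist["entry-042"] + 0.35 * rng.normal(size=128)
print(check_watchlist(lookalike, watchlist))  # "entry-042": a false positive
```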
“Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing,” the FTC’s complaint reads.
The FTC further said that Rite Aid failed to inform consumers that facial recognition technology was in use, while instructing employees not to reveal this information to customers.
Face-off
Facial recognition software has become one of the most contentious aspects of the AI-powered surveillance age. In recent years, cities have enacted sweeping bans on the technology, while lawmakers have struggled to regulate how police use it. Meanwhile, companies such as Clearview AI have been hit with lawsuits and fines around the world for major data privacy violations involving facial recognition technology.
The FTC’s findings on Rite Aid also shed light on inherent biases in AI systems. For example, the FTC says Rite Aid failed to mitigate risks to certain customers because of their race: its technology was “more likely to generate false positives in stores located in plurality-black and Asian communities than in plurality-white communities,” according to the findings.
The FTC also charged Rite Aid with failing to test or monitor the accuracy of its facial recognition technology, either before or after deploying it.
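For context, the kind of ongoing accuracy monitoring the FTC says was missing can be simple in principle. The sketch below assumes a hypothetical log of alerts, each tagged with a store’s demographic group and whether human review confirmed the match; the log format and numbers are invented for illustration. It computes the share of unconfirmed (likely false) matches per group, which is how a disparity like the one the FTC describes would surface in an audit.

```python
# Hypothetical per-group audit of alert quality. The log format and
# values are invented for illustration, not from the FTC complaint.
from collections import defaultdict

# Each record: (store_demographic_group, alert_confirmed_by_review)
alert_log = [
    ("plurality-white", True),
    ("plurality-white", False),
    ("plurality-black", False),
    ("plurality-black", False),
    ("plurality-black", True),
    ("plurality-Asian", False),
]

def unconfirmed_share_by_group(log):
    """Share of alerts in each group that review could not confirm,
    i.e. likely false matches."""
    total = defaultdict(int)
    unconfirmed = defaultdict(int)
    for group, confirmed in log:
        total[group] += 1
        if not confirmed:
            unconfirmed[group] += 1
    return {g: unconfirmed[g] / total[g] for g in total}

for group, share in sorted(unconfirmed_share_by_group(alert_log).items(),
                           key=lambda kv: -kv[1]):
    print(f"{group}: {share:.0%} of alerts were unconfirmed matches")
```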
In a press statement, Rite Aid said it was “pleased to reach an agreement with the FTC,” but that it disagreed with the core allegations in the complaint.
“The allegations relate to a facial recognition technology pilot program that the company deployed in a limited number of stores,” Rite Aid said in a statement. “Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the company’s use of the technology began.”