Decoding distinctions in biometrics

Though some developers describe facial recognition and analysis as “completely different technologies,” privacy advocates argue they pose the same kinds of dangers.

What does it mean when a computer wants to look at your face? 

It depends. “Facial recognition” and “facial analysis” are closely related terms, both referring to the process by which a series of algorithms registers and makes decisions about a given human face. But privacy advocates, regulators and software companies themselves have a lot riding on the distinctions between different types of facial scanning software.

Earlier this month, the Federal Trade Commission rejected an application from three software companies offering identity authentication technology that processes facial details to verify a user’s age as a parental control tool.

The rejection of the companies’ facial age estimation technology followed public comments on the technology’s ability to safeguard a child user’s identity. Notably, the FTC said additional information would aid “the Commission and the public in better understanding age verification technologies and the application,” despite the agency having approved a similar tool in 2015.

Privacy advocates swiftly celebrated the decision.  

“Based on a long history of error-ridden facial recognition technologies disproportionately harming people who aren’t affluent, white, and male, we applaud the FTC’s rejection of this facial recognition technology marketed as a tool to verify age,” Fight for the Future Campaigns and Communications Director Lia Holland told Nextgov/FCW in early April. 

One of the applicant companies, Yoti, disagreed with the characterization of its software as “facial recognition technologies.” Julie Dawson, the chief regulatory and policy officer at Yoti, said the difference between face-scanning systems stems from whether a system recognizes an individual as that specific individual.

“If you do an age estimation check over and over again, the facial age estimation technology does not say to itself ‘I have seen this face before and I can use that information to estimate the age on this image,’” Dawson told Nextgov/FCW. “All the facial analysis model knows how to do is estimate age. It detects a live face and then analyses it to assess age.”

She added that Yoti’s model does not identify a person, but instead leverages deep learning neural networks trained to estimate a single characteristic from images. How a network processes and analyzes its inputs to produce outputs is what determines whether a facial scanning technology performs recognition or analysis. Dawson said facial recognition systems go further, linking images to a specific identity based on biometric data, whereas facial analysis systems group images by a characteristic, such as age, computed from that biometric data.
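To make that distinction concrete, here is a minimal Python sketch of the two pipelines Dawson describes. The model functions are toy stand-ins labeled as such in the code (not Yoti’s actual system, and the names, shapes and numbers are purely illustrative): the point is that recognition compares a face against retained identity templates, while analysis computes a single characteristic and keeps nothing.

```python
import numpy as np

# Toy stand-ins for trained models; a production system would use deep
# neural networks. Both map a face image to numbers, but to different ends.

def embed_face(image: np.ndarray) -> np.ndarray:
    """Recognition path: reduce a face to an identity embedding
    that can be matched against stored templates."""
    rng = np.random.default_rng(0)  # fixed fake "weights" for the demo
    projection = rng.standard_normal((image.size, 128))
    vec = image.ravel() @ projection
    return vec / np.linalg.norm(vec)

def estimate_age(image: np.ndarray) -> float:
    """Analysis path: reduce a face to a single characteristic (age),
    with no identity template produced or kept."""
    rng = np.random.default_rng(1)  # fixed fake "weights" for the demo
    weights = rng.standard_normal(image.size)
    return float(40 + 10 * np.tanh(image.ravel() @ weights / image.size))

probe = np.random.rand(64, 64)  # placeholder for a captured face image

# Facial recognition: the probe is grouped to a specific identity by
# comparing it against retained biometric templates.
gallery = {"enrolled_user": embed_face(np.random.rand(64, 64))}
match_scores = {name: float(embed_face(probe) @ template)
                for name, template in gallery.items()}

# Facial analysis: one forward pass, one characteristic out; nothing
# about this face needs to be stored afterward.
estimated_age = estimate_age(probe)
```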

Critically, Dawson says Yoti’s systems never learn the identity of a given user. 

“It is important for regulators, businesses and the wider public to understand the difference between the two, as they are completely different technologies,” Dawson said. 

Other industry experts disagree. Electronic Frontier Foundation attorney Adam Schwartz said that both systems still raise serious privacy and ethics concerns.

“We think that face analysis without face matching is still a threat to our civil rights and civil liberties,” he told Nextgov/FCW. 

Schwartz said that several ongoing issues with the larger identity mapping industry –– including the error-prone nature of the technology, its potential for societal harms, and the lack of legal enforcement to mandate software companies safely dispose of collected biometric data –– all raise questions about the inherent security of deploying facial scanning systems regardless of their nomenclature.

“The suggestion that, ‘don't worry, we're just analyzing the face for demographics. We're not matching the face to figure out who this person is, so leave us alone,’ that rings very hollow. The face analysis without the face matching is a threat to privacy and other civil liberties,” he said.

“The laws right now, unfortunately, are quite limited.”

The mismatch between technological innovation and enforceable legal regulations is nothing new. In the case of facial recognition versus facial analysis, Cobun Zweifel-Keegan, the managing director of the International Association of Privacy Professionals, said that even the term “biometric” can be disputed.

“There's an ongoing question of what counts as a biometric and I think that is definitely not settled,” said Zweifel-Keegan. “People might say it’s settled from a technical perspective, but it's definitely not settled as a matter of law, and we see regulators expanding definitions of biometrics to include things that don't have the use component, where anything that is potentially a biometric should be subjected to heightened protections.”

Dawson told Nextgov/FCW that Yoti refrains from saying its products are biometric technology, given the shifting definition of biometrics in different jurisdictions. 

From the technological side, Patrick Grother, a staff scientist with the National Institute of Standards and Technology, said that the distinction between face recognition and face analysis comes down to the very math executed by a system’s algorithms and whether or not it stores the necessary data.

He said that facial recognition software is based on one-to-one or one-to-many matching, a technique that requires the digital storage of biometric data for comparison, such as with the face login feature on Apple’s iPhone.

“Face recognition takes two photos, extracts mathematical information from them and compares it. So in essence, it's comparing two photos and the goal is to say ‘is this the same person or not?’” Grother told Nextgov/FCW. “[In] analysis — and this is Yoti’s point — you look at one photo, you figure out the age using some algorithm, and everything's gone.”

Without storing this information, Grother says the software is not facial recognition.
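As a rough illustration of the matching step Grother describes, the Python sketch below assumes face embeddings have already been extracted by some model; the function names and the threshold value are hypothetical, not drawn from any product NIST has tested.

```python
import numpy as np

def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """One-to-one verification: 'is this the same person or not?'
    The threshold here is illustrative; real systems tune it carefully."""
    cosine = float(emb_a @ emb_b /
                   (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return cosine >= threshold

def identify(probe_emb: np.ndarray,
             gallery: dict[str, np.ndarray]) -> str:
    """One-to-many identification against a stored gallery of templates.
    The retained gallery is what makes this recognition, not analysis."""
    return max(gallery, key=lambda name: float(probe_emb @ gallery[name]))
```

In both functions, the decisive ingredient is a stored reference embedding to compare against; an analysis-only system, by contrast, has nothing on disk to match a second photo to.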

“It's always useful in all of these discussions to realize the algorithms are different,” he said. “They don't always thoroughly behave in the same way because different developers have got different amounts of training data and different amounts of aptitudes for building these things.”

Zweifel-Keegan agreed that there is a “major technical difference” between the two terms, despite the lack of a firm legal delineation.

“A lot of that just has to do with the fact that they're serving very different purposes, and so those two different AI systems are trained to do entirely different things,” he said. 

Still, data privacy advocates maintain that the technology is inherently biased based on historical assumptions of what a “standard” or “normal” face should look like, regardless of the programming within the system.

“As public opposition to facial recognition technology has grown, a lot of companies have tried to label what they do as ‘facial analysis’ to try to distinguish themselves from the negative narrative,” Caitlin Seeley George, the campaigns and managing director at Fight for the Future, told Nextgov/FCW. “But no matter what they call it, the problems with the technology still remain. Even without comparing a specific image to a database of images, ‘facial analysis’ is based on the idea that an algorithm can determine a person's age, gender, or emotions from their image.”

Seeley George said that these biases have connections to scientifically debunked ideas like physiognomy — which claims to divine personality based on facial features — and can disproportionately target or discriminate against certain ethnicities, people with disabilities or gender non-conforming individuals.

“This biased technology should not determine whether or not someone can access online platforms and resources,” she said. “The FTC absolutely made the correct decision in this case.”

Tatiana Rice, the deputy director at the Future of Privacy Forum, also said that there are still risks to “facial analysis” technology, but that they are distinct from those of “facial recognition.” 

“It doesn't mean that there's no risk associated with facial analysis, but [the] risk is not tied to identity theft,” Rice told Nextgov/FCW. “The risk is tied to discrimination or inaccuracy, which is closer to risks that we see more broadly with just artificial intelligence than what I would say it's a unique risk to biometric recognition systems.”

Regardless of any acknowledged differences between the technology behind “facial recognition” and “facial analysis” tools, the U.S. still lacks a federal law mandating that companies creating and deploying facial analysis software delete the data the technology processes, stoking privacy advocates’ concern over abuse.

“There's no law that says they have to delete it instantaneously in most of the country,” Schwartz said. “We're kind of in the Wild West and we've got a bunch of corporate pinky promises.”

Some states have taken the initiative to protect consumers from misuse of biometric technology. The Washington My Health My Data Act and the Illinois Biometric Information Privacy Act, signed in 2023 and 2008 respectively, establish protections for residents’ biometric data. Zweifel-Keegan noted that Illinois’s law contains specifications about how the technology is built that affect how the legislation applies.

“As a matter of law, it's really not established what that kind of difference is [within facial scanning technologies],” he said. 

The remedy naturally hinges on aligning definitions, potentially through broader federal legislation. Rice said that this could be one of the net benefits of a national data privacy law that encompasses a diverse array of technologies.

“Because there can be ambiguity on what is a biometric, it's easier to just have a comprehensive privacy law that's tied to identity … that's covering all of your bases, all of the different privacy risks,” she said. 

Zweifel-Keegan said that, regardless of the best practices both law and private industry can offer, ensuring the public is informed and protected is paramount.

“If we overly-normalize such systems, we consumers might become inured to the risks and might not recognize risky or problematic deployments of these types of technologies when they actually arise,” he said. “We need to make sure that we're building out ways to inform consumers of those differences if we really want them to have autonomy and control over how their information is used.”