Biometrics at the crossroads of the AI Act and the GDPR

Richard Lawne of Fieldfisher explains how organisations using or seeking to deploy biometric technologies should prepare.

Biometric technologies have been described as “the plutonium of AI” and “the most uniquely dangerous surveillance mechanism ever invented”. At their core, they analyse individuals based on physiological and behavioural attributes such as fingerprints, facial geometry, iris patterns, voice, eye movement, heart rate, gait, and even DNA. Traditionally, biometrics have been used for recognition, whether one-to-one authentication (e.g. unlocking devices) or one-to-many identification (e.g. crowd surveillance). In recent years, however, their use has expanded beyond recognition into deeper profiling, with AI-driven applications claiming to infer people’s emotions, personality traits, and other characteristics from physical appearance alone. These capabilities are being adopted across sectors, raising significant legal and ethical concerns: law enforcement agencies use biometrics to predict criminal behaviour; businesses use them to gauge customer sentiment, personalise services, and enforce age and gender restrictions; and employers use them to assess candidates and monitor employee productivity.
