Research Group - Responsible AI for Biometrics

Biometric systems are spreading worldwide and have a growing effect on our daily lives. For instance, smart homes make use of voice recognition, face images are used at airports to verify a person’s claimed identity, and fingerprints are used to unlock smartphones. The high performance of current biometric systems is driven by advances in deep learning. However, the success of adapting these techniques for recognition comes at the cost of major concerns regarding fairness, privacy, reliability, and explainability. Since biometric systems are also increasingly involved in critical decision-making processes, such as in forensics and law enforcement, there is a growing need to develop responsible AI algorithms for biometric solutions.

  • Fairness – Many biometric solutions are built on learning strategies that optimize overall recognition performance. Since these strategies depend strongly on the underlying properties of the training data, such as its demographic composition, so does the performance of the learned solutions. This can lead to strong discriminatory effects, e.g. in forensic investigations or law enforcement (see the first sketch after this list).
  • Privacy – The deeply-learned representation of an individual contains more information than just the individual’s identity. Privacy-sensitive information, such as gender, age, ethnicity, and health status, is deducible from such a representation. Since in many applications the stored templates are expected to be used for recognition purposes only, the presence of this additional information raises major privacy issues: unauthorized access to an individual’s privacy-sensitive information can lead to unfair or unequal treatment of that individual (see the second sketch after this list).
  • Reliability – The decisions of biometric systems often have a strong impact on users, and wrong decisions can come at high financial and societal costs. It is therefore important to develop algorithms that not only make accurate decisions but can also accurately state their own confidence in a decision. In practice, this helps avoid unjustifiable actions based on wrong decisions. Moreover, discarding low-confidence decisions entirely, or deferring them to a more reliable system or a human operator, further reduces the chance of error (see the third sketch after this list).
  • Explainability – Current biometric recognition systems mainly provide comparison scores and matching decisions to the user, without justification of how a decision was reached. This is further compounded by the black-box character of current AI-based biometric solutions. The lack of transparency prevents humans from verifying, interpreting, and understanding the reasoning behind a system and how particular decisions are made. Explainable biometrics aims at making the recognition process understandable for humans while preserving its high performance.
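
To make the fairness concern concrete, the following minimal sketch (an illustration with synthetic data, not the group's actual methodology; all names are hypothetical) shows one simple way to quantify demographic performance gaps: the false match rate (FMR) and false non-match rate (FNMR) are computed per demographic group at a shared decision threshold, and the spread across groups indicates potential bias.

    # Hypothetical sketch: per-group error rates at a shared decision threshold.
    import numpy as np

    def per_group_error_rates(scores, is_genuine, groups, threshold):
        """FMR and FNMR per demographic group.

        scores     -- similarity scores of biometric comparisons
        is_genuine -- True where a comparison involves samples of the same person
        groups     -- demographic group label of each comparison
        threshold  -- global decision threshold (score >= threshold means "match")
        """
        rates = {}
        for g in np.unique(groups):
            sel = groups == g
            fnmr = np.mean(scores[is_genuine & sel] < threshold)   # genuine pairs wrongly rejected
            fmr = np.mean(scores[~is_genuine & sel] >= threshold)  # impostor pairs wrongly accepted
            rates[g] = (fmr, fnmr)
        return rates

    # Synthetic example: a large FNMR gap between groups A and B at the same
    # threshold would indicate a demographic bias of the recognition system.
    rng = np.random.default_rng(0)
    is_genuine = rng.random(2000) < 0.5
    scores = np.where(is_genuine, rng.normal(0.7, 0.1, 2000), rng.normal(0.4, 0.1, 2000))
    groups = rng.choice(["A", "B"], 2000)
    print(per_group_error_rates(scores, is_genuine, groups, threshold=0.55))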
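
The privacy concern can be demonstrated in the same spirit: if a simple classifier trained on stored templates predicts an attribute such as gender clearly better than chance, that attribute leaks from the templates. The sketch below is an assumed setup in which synthetic embeddings stand in for deeply-learned templates, with scikit-learn's LogisticRegression as the attacker model.

    # Hypothetical sketch: attribute inference attack on biometric templates.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for deeply-learned templates: 512-dimensional embeddings
    # in which a few dimensions correlate with a binary attribute (e.g. gender).
    rng = np.random.default_rng(0)
    attribute = rng.integers(0, 2, 5000)
    templates = rng.normal(0, 1, (5000, 512))
    templates[:, :8] += attribute[:, None] * 0.5  # attribute information leaks into the template

    X_train, X_test, y_train, y_test = train_test_split(templates, attribute, random_state=0)
    attacker = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # An accuracy clearly above 0.5 means the "recognition-only" template
    # exposes privacy-sensitive information to anyone holding it.
    print("attribute inference accuracy:", attacker.score(X_test, y_test))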
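
For the reliability concern, the following minimal sketch (a hypothetical interface, not the group's implementation) illustrates confidence-gated decision making: the system acts only on comparisons whose self-estimated confidence exceeds a threshold and defers the rest to a human operator or a more reliable system.

    # Hypothetical sketch: confidence-gated biometric decisions with deferral.

    def decide_or_defer(score, confidence, match_threshold=0.6, confidence_threshold=0.8):
        """Return "match" or "non-match" for confident comparisons, else "defer".

        score      -- similarity score of the biometric comparison
        confidence -- the system's self-estimated confidence in its decision
        """
        if confidence < confidence_threshold:
            # Too uncertain: hand the case over instead of risking an
            # unjustifiable action based on a wrong decision.
            return "defer"
        return "match" if score >= match_threshold else "non-match"

    print(decide_or_defer(score=0.72, confidence=0.95))  # -> match
    print(decide_or_defer(score=0.41, confidence=0.30))  # -> defer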

Head

Dr.-Ing. Philipp Terhörst

Research group leader - Responsible AI for Biometrics

Phone: +49 5251 60-6657