Research Group - Responsible AI for Biometrics
Biometric systems are spreading worldwide and have a growing effect on our daily lives. For instance, smart homes use voice recognition, face images verify a traveler's claimed identity at the airport, and fingerprints unlock smartphones. The high performance of current biometric systems is driven by advances in deep learning. However, adapting these techniques for recognition raises major concerns regarding fairness, privacy, reliability, and explainability. Since biometric systems are also increasingly involved in critical decision-making processes, such as in forensics and law enforcement, there is a growing need to develop responsible AI algorithms for biometric solutions.
- Fairness – Many biometric solutions are built on learning strategies that optimize overall recognition performance. Because these strategies strongly depend on the underlying properties of the training data, the performance of the learned solutions also depends on properties of the dataset, such as its demographic composition. This can lead to strong discriminatory effects, e.g. in forensic investigations or law enforcement.
- Privacy – The deeply-learned representation of an individual contains more information than just the individual’s identity. Privacy-sensitive information, such as gender, age, ethnicity, and health status, is deducible from such a representation. Since in many applications the templates are expected to be used for recognition only, the presence of this additional information raises major privacy issues. For instance, unauthorized access to an individual’s privacy-sensitive information can lead to unfair or unequal treatment of this individual.
- Reliability – The decisions of biometric systems often have a strong impact on users, and wrong decisions can come at high financial and societal costs. Therefore, it is important to develop algorithms that not only make accurate decisions but can also accurately state their own confidence in each decision. In practice, this helps avoid unjustifiable actions based on wrong decisions. Moreover, discarding low-confidence decisions entirely, or deferring them to a more reliable system or a human operator, further reduces the chance of error.
- Explainability – Current biometric recognition systems mainly provide comparison scores and matching decisions to the user without justification of how the decision was reached. This is further compounded by the black-box character of current AI-based biometric solutions. The lack of transparency prevents humans from verifying, interpreting, and understanding the reasoning behind a system and how particular decisions are made. Explainable biometrics aims at making the recognition process understandable to humans while preserving its high performance.
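The fairness issue above can be made concrete with a simple measurement: comparing error rates per demographic group instead of only overall. The sketch below (a minimal illustration with toy comparison scores, not any specific system's evaluation protocol) computes the false non-match rate separately for two hypothetical groups, showing how a single global decision threshold can reject genuine users of one group more often than another.

```python
import numpy as np

def per_group_fnmr(scores, genuine, groups, threshold):
    """False non-match rate (genuine pairs wrongly rejected) per demographic group."""
    scores = np.asarray(scores, dtype=float)
    genuine = np.asarray(genuine, dtype=bool)
    groups = np.asarray(groups)
    rates = {}
    for g in np.unique(groups):
        mask = genuine & (groups == g)  # genuine comparisons belonging to group g
        rates[str(g)] = float(np.mean(scores[mask] < threshold))
    return rates

# Toy comparison scores: genuine pairs of group "B" score lower on average,
# so one global threshold produces unequal rejection rates across groups.
scores  = [0.90, 0.80, 0.85, 0.55, 0.60, 0.45, 0.10, 0.20]
genuine = [1, 1, 1, 1, 1, 1, 0, 0]
groups  = ["A", "A", "A", "B", "B", "B", "A", "B"]
print(per_group_fnmr(scores, genuine, groups, threshold=0.5))
# Group "A" loses no genuine users, group "B" loses one in three.
```

Reporting such differential error rates, rather than a single aggregate accuracy, is one common starting point for diagnosing demographic bias in a recognition system.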
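The privacy concern can likewise be illustrated: if any direction in the embedding space correlates with a soft-biometric attribute, even a very simple classifier can recover that attribute from templates alone. The sketch below uses synthetic embeddings (not real face templates or any published attack) in which a few dimensions leak a binary attribute, and a nearest-centroid classifier infers it well above chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 32-d "embeddings": most dimensions are attribute-neutral noise,
# but the attribute (e.g. gender) shifts the first four dimensions slightly.
n, d = 200, 32
attr = rng.integers(0, 2, size=n)          # hidden soft-biometric label
emb = rng.normal(size=(n, d))
emb[:, :4] += 1.5 * attr[:, None]          # the attribute leaks into 4 dims

# Nearest-centroid "attack": estimate one centroid per attribute value on
# half of the data, then predict the attribute of the other half from the
# embeddings alone.
idx = np.arange(n)
train, test = idx < n // 2, idx >= n // 2
c0 = emb[train & (attr == 0)].mean(axis=0)
c1 = emb[train & (attr == 1)].mean(axis=0)
d0 = np.linalg.norm(emb[test] - c0, axis=1)
d1 = np.linalg.norm(emb[test] - c1, axis=1)
pred = (d1 < d0).astype(int)
accuracy = float(np.mean(pred == attr[test]))
print(f"attribute inference accuracy: {accuracy:.2f}")  # well above chance (0.50)
```

Even this crude probe recovers the attribute reliably, which is why templates intended purely for recognition can still expose privacy-sensitive information.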
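The reliability idea of deferring low-confidence decisions can be sketched in a few lines. In this minimal illustration (toy scores and an assumed distance-to-threshold confidence measure, not the group's actual method), verification decisions whose comparison score lies close to the threshold are flagged for referral, and the accuracy on the retained decisions improves.

```python
import numpy as np

def accept_with_confidence(scores, threshold, margin):
    """Return (decisions, confident_mask): decisions whose score lies within
    `margin` of the threshold are flagged as low-confidence for referral."""
    scores = np.asarray(scores, dtype=float)
    decisions = scores >= threshold            # accept / reject the identity claim
    confident = np.abs(scores - threshold) >= margin
    return decisions, confident

scores = np.array([0.95, 0.52, 0.49, 0.10, 0.48, 0.90])
labels = np.array([1, 1, 0, 0, 1, 1], dtype=bool)  # ground truth: match / non-match
dec, conf = accept_with_confidence(scores, threshold=0.5, margin=0.1)

overall = float(np.mean(dec == labels))            # accuracy over all decisions
retained = float(np.mean(dec[conf] == labels[conf]))  # accuracy after deferral
print(overall, retained)  # 0.833... -> 1.0 on the confident subset
```

The errors here all occur near the threshold, so deferring exactly those borderline cases to a human operator removes them, which is the practical benefit of confidence-aware decision making.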
Head
Junior Research Group Leader - Dr.-Ing. Philipp Terhörst
Office: F2.104
Phone: +49 5251 60-6657
E-mail: philipp.terhoerst@uni-paderborn.de
Web: Homepage
Office hours:
Upon request
Secretariat
Secretary - Secretariat of Dr.-Ing. Philipp Terhörst
Office: F2.303
Phone: +49 5251 60-6665
E-mail: lydia.kreiss@uni-paderborn.de
Office hours:
Monday - Thursday: 9:00 a.m. - 2:00 p.m.
Publications
A Comprehensive Study on Face Recognition Biases Beyond Demographics
P. Terhörst, J.N. Kolf, M. Huber, F. Kirchbuchner, N. Damer, A. Morales, J. Fierrez, A. Kuijper, IEEE Transactions on Technology and Society 3 (2022) 16–30.
Verification of Sitter Identity Across Historical Portrait Paintings by Confidence-aware Face Recognition
M. Huber, P. Terhörst, A.T. Luu, F. Kirchbuchner, N. Damer, in: 26th International Conference on Pattern Recognition, ICPR 2022, Montreal, QC, Canada, August 21-25, 2022, IEEE, 2022, pp. 938–944.
An Attack on Facial Soft-Biometric Privacy Enhancement
D.O. Roig, C. Rathgeb, P. Drozdowski, P. Terhörst, V. Struc, C. Busch, IEEE Trans. Biom. Behav. Identity Sci. 4 (2022) 263–275.
Stating Comparison Score Uncertainty and Verification Decision Confidence Towards Transparent Face Recognition
M. Huber, P. Terhörst, F. Kirchbuchner, N. Damer, A. Kuijper, 33rd British Machine Vision Conference 2022 (2022).
MiDeCon: Unsupervised and Accurate Fingerprint and Minutia Quality Assessment based on Minutia Detection Confidence
P. Terhörst, A. Boller, N. Damer, F. Kirchbuchner, A. Kuijper, in: International IEEE Joint Conference on Biometrics, IJCB 2021, Shenzhen, China, August 4-7, 2021, IEEE, 2021, pp. 1–8.
Privacy-Enhancing Face Biometrics: A Comprehensive Survey
B. Meden, P. Rot, P. Terhörst, N. Damer, A. Kuijper, W.J. Scheirer, A. Ross, P. Peer, V. Struc, IEEE Trans. Inf. Forensics Secur. 16 (2021) 4147–4183.
MAAD-Face: A Massively Annotated Attribute Dataset for Face Images
P. Terhörst, D. Fährmann, J.N. Kolf, N. Damer, F. Kirchbuchner, A. Kuijper, IEEE Trans. Inf. Forensics Secur. 16 (2021) 3942–3957.
Mitigating Soft-Biometric Driven Bias and Privacy Concerns in Face Recognition Systems
P. Terhörst, Mitigating Soft-Biometric Driven Bias and Privacy Concerns in Face Recognition Systems, Technical University of Darmstadt, Germany, 2021.
On Soft-Biometric Information Stored in Biometric Face Embeddings
P. Terhörst, D. Fährmann, N. Damer, F. Kirchbuchner, A. Kuijper, IEEE Trans. Biom. Behav. Identity Sci. 3 (2021) 519–534.