We do research on trustworthy human language technologies
The Natural Language Processing group focuses on trustworthy human language technologies. This encompasses privacy-preserving NLP, NLP in the legal domain, and related areas.
Teaching
L.079.05551 Natural Language Processing with Deep Learning (BSc)
- Download the lecture slides, released under open licences, from our GitHub page: https://github.com/trusthlt/nlp-with-deep-learning-lectures/
- Watch our live-recorded lectures on YouTube: https://www.youtube.com/playlist?list=PL6WLGVNe6ZcB00apoxMtj7WSUOlpm2Xvl
- Join the discussion on Discord (the invite link is available to enrolled students on the PANDA page: https://panda.uni-paderborn.de/course/view.php?id=50755)
Seminar on selected topics in privacy-preserving natural language processing (MSc)
Selected recent publications
Differentially Private Natural Language Models: Recent Advances and Future Directions
L. Hu, I. Habernal, L. Shen, D. Wang, in: Y. Graham, M. Purver (Eds.), Findings of the Association for Computational Linguistics: EACL 2024, St. Julian’s, Malta, March 17-22, 2024, Association for Computational Linguistics, 2024, pp. 478–499.
DP-NMT: Scalable Differentially Private Machine Translation
T. Igamberdiev, D.N.L. Vu, F. Kuennecke, Z. Yu, J. Holmer, I. Habernal, in: N. Aletras, O. De Clercq (Eds.), Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, Association for Computational Linguistics, St. Julian's, Malta, 2024, pp. 94–105.
Privacy-Preserving Natural Language Processing
I. Habernal, F. Mireshghallah, P. Thaine, S. Ghanavati, O. Feyisetan, in: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts, Association for Computational Linguistics, 2023.
Trade-Offs Between Fairness and Privacy in Language Modeling
C. Matzken, S. Eger, I. Habernal, in: Findings of the Association for Computational Linguistics: ACL 2023, Association for Computational Linguistics, 2023.
Crowdsourcing on Sensitive Data with Privacy-Preserving Text Rewriting
N. Mouhammad, J. Daxenberger, B. Schiller, I. Habernal, in: Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII), Association for Computational Linguistics, 2023.
Privacy-Preserving Models for Legal Natural Language Processing
Y. Yin, I. Habernal, in: Proceedings of the Natural Legal Language Processing Workshop 2022, Association for Computational Linguistics, 2022.
How Much User Context Do We Need? Privacy by Design in Mental Health NLP Applications
R. Sawhney, A. Neerkaje, I. Habernal, L. Flek, Proceedings of the International AAAI Conference on Web and Social Media 17 (2023) 766–776.
One size does not fit all: Investigating strategies for differentially-private learning across NLP tasks
M. Senge, T. Igamberdiev, I. Habernal, in: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2022.
DP-BART for Privatized Text Rewriting under Local Differential Privacy
T. Igamberdiev, I. Habernal, in: Findings of the Association for Computational Linguistics: ACL 2023, Association for Computational Linguistics, 2023.
The Legal Argument Reasoning Task in Civil Procedure
L. Bongard, L. Held, I. Habernal, in: Proceedings of the Natural Legal Language Processing Workshop 2022, Association for Computational Linguistics, 2022.