Open Topics for Final Theses
If one of the topics listed here appeals to you, please get in touch with the contact person indicated. Topics suitable for both a bachelor's thesis and a master's thesis are marked with an asterisk (*) in the title.
Even if none of the topics matches your interests, we are happy to hear from you so that we can find a suitable topic together. For orientation, we also recommend taking a look at the research focus areas of the group.
Bachelor's Theses
In lattice cryptography there are many parameters, and many, if not all, useful theorems require certain bounds on them. Examples include the length of vectors under the L2 or infinity norm, the spectral norm of matrices drawn from certain distributions, or the so-called smoothing parameter.
When instantiating cryptographic schemes based on lattices, one has to choose values for these parameters such that security still holds. However, theoretical bounds are often quite conservative, leading to rather inefficient schemes.
The question now is whether there are heuristics with which one could choose values for the parameters and how much these heuristics could improve the efficiency of the cryptographic schemes.
Your task is to identify interesting parameters, to design and implement heuristic tests for bounds, and to compare these heuristics to the theoretical bounds. Afterwards, you will compare the efficiency of instantiations of some cryptographic schemes based on your heuristics and on the theoretical bounds.
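As a first impression of such a heuristic test, the following sketch compares a conservative theoretical tail bound on the L2 norm of a Gaussian vector with an empirically estimated high quantile. The dimension, standard deviation, quantile, and the constant in the bound are illustrative assumptions, not values from any concrete scheme:

```python
import math
import random

def empirical_l2_quantile(n, sigma, trials=2000, q=0.999, seed=1):
    """Empirically estimate a high quantile of the L2 norm of an
    n-dimensional vector with i.i.d. Gaussian entries of std dev sigma."""
    random.seed(seed)
    norms = sorted(
        math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(n)))
        for _ in range(trials)
    )
    return norms[int(q * (trials - 1))]

n, sigma = 256, 3.0
# A common conservative style of tail bound: ||x|| <= c * sigma * sqrt(n),
# here with the illustrative constant c = 2.
theoretical = 2.0 * sigma * math.sqrt(n)
empirical = empirical_l2_quantile(n, sigma)
print(f"theoretical bound: {theoretical:.1f}, empirical 99.9% quantile: {empirical:.1f}")
```

The gap between the two numbers is exactly the kind of slack a heuristic choice of parameters could exploit.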
Related Papers
- A Decade of Lattice Cryptography (a good starting point to learn about lattices): https://eprint.iacr.org/2015/939.pdf
Contact: Laurens Porzenheim
Secure Multiparty Computation (MPC) allows several parties to jointly compute a function while keeping their inputs private. Different frameworks describe the security and reliability of such protocols. Traditional models define adversaries as honest-but-curious or malicious. Universally composable security extends this to settings where many protocols may run in parallel. More recently, rational approaches model participants as self-interested agents who act strategically according to their incentives.
This thesis aims to provide a comparison of these approaches by studying how their goals and efficiency differ depending on the considered adversarial behavior and network setting. The work will start with a literature review and then include practical experiments or implementations to estimate efficiency and scalability. The outcome should give a structured overview of how traditionally secure MPC and rational MPC frameworks relate to each other and what trade-offs appear in practice.
Contact: Henrik Bröcher
Gottesman-Kitaev-Preskill (GKP) codes, as iconic representatives of continuous-variable quantum error correction codes, also have a deep relation to lattices. Exploiting this relation, one can construct GKP codes from SIS lattices, called SIS-GKP codes. The construction involves many parameters, and the code distance differs for different parameter choices. This thesis is meant to study GKP codes and their relation to lattices; then to study how the distance of SIS-GKP codes depends on the construction parameters; and finally to give a mathematical and/or physical explanation for this dependence.
Related Papers
- GKP code (Original: https://arxiv.org/pdf/quant-ph/0008040; More computer scientific focus: https://arxiv.org/pdf/2507.06943)
- GKP code with lattice: https://arxiv.org/pdf/2109.14645v3
- SIS-GKP code: https://arxiv.org/pdf/2509.10183
Contact: Yinzi Xiao
Content:
As the era of quantum computing approaches, classical cryptographic systems face a potential challenge. Post-quantum cryptography aims to create new protocols that remain robust against quantum computers. Another approach, however, is to use quantum systems inside the cryptographic protocols themselves. In this thesis, research begins by surveying and collecting current quantum protocols, comparing their operational mechanics and security guarantees against their classical counterparts to highlight advantages in information-theoretic security versus disadvantages in implementation complexity. A core component of the study is a rigorous, in-depth analysis of one selected protocol, dissecting its architecture and mathematically verifying its security proofs against coherent attacks. If possible, the thesis will also propose theoretical extensions or architectural improvements to the selected protocol, addressing limitations such as noise resilience, channel loss, or network scalability.
(* This topic is also suitable for a master's thesis.)
Related Papers:
- Quantum Key Recycling: https://eprint.iacr.org/2016/1122.pdf
- Quantum Secure Direct Communication: https://academic.oup.com/nsr/article/12/8/nwaf096/8087354
- Quantum Public-Key Encryption: https://arxiv.org/pdf/2304.02999
- Quantum Digital Signatures: https://www.mdpi.com/1099-4300/27/11/1179
- Quantum Key Distribution (twin-field): https://arxiv.org/pdf/2510.26320
Contact: Yinzi Xiao
Content:
Quantum error-correcting codes and fault-tolerant quantum computing are essential to realize quantum computers. Graph codes and stabilizer codes are two types of quantum error-correcting codes. Virtually all practically promising quantum error-correcting codes are stabilizer codes. It is well-known that every graph code can be represented as a stabilizer code, and vice versa. There are at least three different proofs of this result, applying not just to qubit error-correcting codes, but more generally to qudit codes. In this thesis, a unified treatment of general qudit graph codes and qudit stabilizer codes should be given. Based on this, an elementary proof of the equivalence of graph codes and stabilizer codes should be given.
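To fix notation for the qubit case (the thesis itself concerns general qudits), a stabilizer code is the joint +1 eigenspace of an abelian subgroup of the Pauli group, and a graph state is a particular stabilizer state with one generator per vertex:

```latex
% Stabilizer code: joint +1 eigenspace of an abelian subgroup
% \mathcal{S} \le \mathcal{P}_n with -I \notin \mathcal{S}
\mathcal{C} \;=\; \{\, |\psi\rangle : S|\psi\rangle = |\psi\rangle
    \ \text{for all } S \in \mathcal{S} \,\}

% Graph state |G> of a graph G = (V,E), with N(v) the neighborhood of v:
K_v \;=\; X_v \prod_{w \in N(v)} Z_w , \qquad
K_v |G\rangle = |G\rangle \quad \text{for all } v \in V
```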
(* This topic is also suitable for a master's thesis.)
Contact: Prof. Dr. Blömer
Master's Theses
Fuzzy k-Means is a popular generalization of the classical k-Means problem to soft clusterings. From an algorithmic perspective, however, it is much more difficult to compute provably good solutions for it. Typically, these problems use the squared Euclidean distance to measure how far apart data points are. The squared Euclidean distance belongs to the mu-similar Bregman divergences, a large class of dissimilarity measures sharing desirable characteristics.
Using sampling techniques to find good approximations of optimal centroids has proven to work both for the k-Means problem with mu-similar Bregman divergences and for the Fuzzy k-Means problem with the squared Euclidean distance. The goal of this thesis is to explore whether the two can be combined to obtain a good approximation algorithm for the Fuzzy k-Means problem with mu-similar Bregman divergences.
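To make the objects concrete, here is a minimal sketch of one iteration of the standard Fuzzy k-Means update with squared Euclidean distance; the fuzzifier m = 2 and the toy data are illustrative choices. Replacing `sq_dist` by a mu-similar Bregman divergence is exactly where the open question of the thesis begins:

```python
def sq_dist(x, c):
    """Squared Euclidean distance between two points given as lists."""
    return sum((a - b) ** 2 for a, b in zip(x, c))

def fuzzy_kmeans_step(points, centers, m=2.0, eps=1e-12):
    """One iteration of Fuzzy k-Means: recompute soft memberships from
    squared distances, then recompute the fuzzy-weighted centroids."""
    k, d = len(centers), len(points[0])
    u = []
    for x in points:
        dists = [sq_dist(x, c) + eps for c in centers]  # eps avoids 0-division
        # u_ij = 1 / sum_l (d_ij / d_il)^(1/(m-1)), with d already squared
        u.append([1.0 / sum((dists[j] / dl) ** (1.0 / (m - 1.0)) for dl in dists)
                  for j in range(k)])
    new_centers = []
    for j in range(k):
        w = [u[i][j] ** m for i in range(len(points))]
        total = sum(w)
        new_centers.append([sum(w[i] * points[i][t] for i in range(len(points))) / total
                            for t in range(d)])
    return u, new_centers

points = [[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]]
centers = [[1.0, 0.0], [9.0, 0.0]]
for _ in range(20):
    u, centers = fuzzy_kmeans_step(points, centers)
print(centers)  # centers drift toward the two point groups
```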
Contact: Johannes Blömer
A consequence of the existence of Shor's algorithm is the following statement of quantum advantage [QA]: simulating a full quantum-mechanical experiment cannot be done by a classical computer [CC] in polynomial time, unless integers can be factored in polynomial time as well.
However, (1) the impossibility of factoring integers on a CC in polynomial time is not proven but only conjectured in theoretical computer science, and (2) Shor's algorithm requires a robust, fault-tolerant quantum computer [QC], which is beyond the current technological level.
Both issues are sidestepped if QA were based on the hardness of computing permanents of arbitrary square complex matrices (a permanent is, put shortly, like a determinant but with only + signs), which is known to be a #P-hard problem. Regarding (1), since solving #P-hard problems is generally considered much harder than factoring, this gives a more solid basis for QA than the unproven difficulty of factoring. Regarding (2), permanents can be estimated by a boson sampling experiment - a much simpler task than constructing a full QC. However, the approximation of a permanent obtained this way is hardly error-free, and a full proof of QA based on boson sampling (itself already a complicated statement) requires a series of specialized techniques to deal with these errors.
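For intuition, the permanent mentioned above can be computed exactly, though in exponential time, via Ryser's formula; the sketch below illustrates why small instances are easy while the general problem is hard:

```python
def permanent(A):
    """Ryser's formula:
    perm(A) = (-1)^n * sum_{S subset of columns, S nonempty}
              (-1)^{|S|} * prod_i (sum_{j in S} A[i][j]),
    running in O(2^n * n^2) -- exponential, matching the #P-hardness
    of the exact problem. Works for real and complex entries alike."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):            # iterate over column subsets S
        bits = bin(mask).count("1")          # |S|
        prod = 1
        for i in range(n):
            prod *= sum(A[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** bits * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # → 10  (= 1*4 + 2*3, no minus sign)
```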
The main task is to give an overview of the worst-to-average-case (in terms of errors) reduction. Other tangents for the overview are welcome. Optionally, the feasibility of transforming boson sampling experiments into an actual QC may be reviewed.
Contact: Dr. Stanislaw Soltan
Content:
The development of a possible Shor's algorithm variant [SAV] with a fixed number of resources sparked interest in a quantum computation model based on hybrid qubit-oscillator systems. In particular, the energy requirements for a quantum computer [QC] of this construction are of great concern.
An algorithm on a hybrid QC consists of: state preparation, a qubit-oscillator circuit, measurements, and classical post-processing. The general analysis of these steps (of which only state preparation is made concrete) provides bounds on the errors given the number of modes and the maximal allowed energy.
A possible task is to apply the energy-dependent error bounds to the SAV, which leads to estimating its usefulness in a practical setting. This involves reconciling the differences between the SAV and the constructions used to derive the bounds.
Note that the energy restriction applies to the highest energy contained in a single mode at any point in the algorithm. The task can therefore be split into parts, each analyzing a different stage of the algorithm; the analysis of one chosen part may already prove fruitful enough. As an optional goal, one may investigate whether a more energy-friendly state preparation is possible.
Related Papers:
Contact: Dr. Stanislaw Soltan
Garay et al. introduced the Rational Protocol Design (RPD) framework (https://eprint.iacr.org/2013/496.pdf) to study cryptographic protocols when participants act rationally rather than being simply honest or malicious. Their model combines game‑theoretic reasoning with simulation‑based cryptographic proofs, allowing the analysis of protocols in terms of both security and incentives.
The goal of this thesis is to understand and present this framework in depth. This includes studying the main paper, explaining the underlying concepts and structure of the model, and identifying follow-up research that extends or applies RPD. The work should clarify how the framework relates to traditional security notions and discuss in which situations it provides advantages or has limitations.
Contact: Henrik Bröcher
Content:
As the era of quantum computing approaches, classical cryptographic systems face a potential challenge. Post-quantum cryptography aims to create new protocols that remain robust against quantum computers. Another approach, however, is to use quantum systems inside the cryptographic protocols themselves. In this thesis, research begins by surveying and collecting current quantum protocols, comparing their operational mechanics and security guarantees against their classical counterparts to highlight advantages in information-theoretic security versus disadvantages in implementation complexity. A core component of the study is a rigorous, in-depth analysis of one selected protocol, dissecting its architecture and mathematically verifying its security proofs against coherent attacks. If possible, the thesis will also propose theoretical extensions or architectural improvements to the selected protocol, addressing limitations such as noise resilience, channel loss, or network scalability.
(* This topic is also suitable for a bachelor's thesis.)
Related Papers:
- Quantum Key Recycling: https://eprint.iacr.org/2016/1122.pdf
- Quantum Secure Direct Communication: https://academic.oup.com/nsr/article/12/8/nwaf096/8087354
- Quantum Public-Key Encryption: https://arxiv.org/pdf/2304.02999
- Quantum Digital Signatures: https://www.mdpi.com/1099-4300/27/11/1179
- Quantum Key Distribution (twin-field): https://arxiv.org/pdf/2510.26320
Contact: Yinzi Xiao
Content:
Code distance is an important property of a quantum code: it indicates how well the code can resist errors. In general, computing the code distance of stabilizer codes is NP-hard. However, if the stabilizer codes have a certain structure, it may be possible to compute their distance more efficiently. Codeword stabilized (CWS) graph codes are a kind of stabilizer code that combines classical codes with graph states. In this thesis, we want to investigate whether there exist efficient algorithms to compute the code distance of CWS graph codes, given that the codes exhibit a certain regular pattern.
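As a warm-up for why structure matters, here is the brute-force baseline for the classical side of a CWS construction: the minimum distance of a binary linear code, computed by enumerating all codewords. It is exponential in the dimension k, which is precisely what an algorithm exploiting a regular pattern would have to beat (the example codes are illustrative):

```python
from itertools import product

def min_distance(generator_rows):
    """Brute-force minimum distance of a binary linear code given the
    rows of its generator matrix (as 0/1 tuples): the minimum Hamming
    weight over all nonzero codewords. O(2^k * k * n) -- tiny codes only."""
    k = len(generator_rows)
    n = len(generator_rows[0])
    best = n
    for coeffs in product([0, 1], repeat=k):
        if not any(coeffs):
            continue  # skip the zero codeword
        word = [0] * n
        for c, row in zip(coeffs, generator_rows):
            if c:
                word = [a ^ b for a, b in zip(word, row)]
        best = min(best, sum(word))
    return best

print(min_distance([(1, 1, 1)]))            # → 3  (length-3 repetition code)
print(min_distance([(1, 1, 0), (0, 1, 1)])) # → 2  (all nonzero words have weight 2)
```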
Related Papers:
- CWS Code: https://arxiv.org/pdf/0708.1021
- Graph States: https://arxiv.org/pdf/quant-ph/0602096
- Graph Codes: https://arxiv.org/pdf/0712.1979
Contact: Yinzi Xiao
Content:
By the Shortest Vector Problem (SVP) we denote the (computationally hard) task of finding a shortest nonzero vector in an arbitrary lattice. By a Discrete Gaussian Sampler (DGS) we mean a probabilistic algorithm outputting lattice points with probabilities governed by a Gaussian distribution centered at a given point. DGS for relatively "flat" (wide) distributions are known.
In 2015, Aggarwal et al. showed how to construct a probabilistic SVP solver out of a ("not-flat") DGS and, crucially, how to convert the set of points output by a "flat" DGS into a smaller set governed by a "not-flat" DGS. The second result relies heavily on a rejection-sampling procedure. However, in 2017, Aggarwal and Stephens-Davidowitz showed that this is not required (and thus the size reduction of the set is not as drastic).
Unfortunately, while this improved algorithm may be "embarrassingly simple", as the authors claim, the 2017 paper is not simple to read. The task at hand is therefore to review the paper (and other relevant literature) and present a unified view of the DGS and SVP algorithms proposed there. Optionally, the same could be done for the Closest Vector Problem, which is also treated in the paper.
Alternatively or additionally, a potential improvement to the reduction of SVP to DGS may be investigated. As stated, the DGS implementation here involves converting the output of a DGS into a set governed by a more concentrated DGS. If the results pre- and post-conversion are "independent enough", perhaps the number of calls to the DGS (and thus the resources needed) can be reduced.
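For a first feel of the object, here is a minimal rejection-sampling sketch of a discrete Gaussian over the one-dimensional lattice Z; sampling over an arbitrary lattice, the actual subject of the papers above, is the hard part. The width sigma and the tail cut-off are arbitrary illustrative choices:

```python
import math
import random

def discrete_gaussian_Z(sigma, tail=10, rng=random):
    """Sample an integer x with probability proportional to
    exp(-x^2 / (2 sigma^2)): a 1D discrete Gaussian over Z, centered at 0.
    Proposal: uniform on [-tail*sigma, tail*sigma]; accept with the
    Gaussian weight. Mass outside the cut-off is negligible for tail=10."""
    bound = int(math.ceil(tail * sigma))
    while True:
        x = rng.randint(-bound, bound)
        if rng.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x

random.seed(0)
samples = [discrete_gaussian_Z(4.0) for _ in range(4000)]
mean = sum(samples) / len(samples)
print(f"empirical mean {mean:.2f} (close to 0, spread roughly sigma = 4)")
```

Rejection sampling is exactly the tool that the 2015 conversion step uses and the 2017 paper manages to avoid, which is what makes the comparison of the two algorithms interesting.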
Related Papers
- https://arxiv.org/abs/1709.01535
- Solving the Shortest Vector Problem in 2^n Time Using Discrete Gaussian Sampling, full version: https://arxiv.org/abs/1412.7994
Contact: Yinzi Xiao