Open Final Theses
If one of the topics listed here appeals to you, please contact the person listed. Even if none of the topics match your interests, we would be happy to hear from you so that we can find a suitable topic together. For your orientation, we recommend that you take a look at the research focus areas of the working group.
Bachelor's Theses
In lattice cryptography there are many parameters, and many, if not all, useful theorems require certain bounds on them. Examples of such parameters are the length of certain vectors under the L2 norm or the infinity norm, the spectral norm of matrices drawn from certain distributions, or the so-called smoothing parameter.
When instantiating cryptographic schemes based on lattices, one has to choose values for these parameters such that security still holds. However, the theoretical bounds are often quite conservative, leading to rather inefficient schemes.
The question is whether there are heuristics for choosing parameter values, and how much such heuristics could improve the efficiency of the resulting schemes.
Your task is to identify interesting parameters, to design and implement heuristic tests for their bounds, and to compare these heuristics to the theoretical bounds. Afterwards, you compare the efficiency of instantiations of selected cryptographic schemes based on your heuristics with instantiations based on the theoretical bounds.
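As a toy illustration of what such a heuristic test might look like (the distribution, the bound, and the parameter values below are illustrative assumptions, not part of the topic), the following Python/NumPy sketch compares a Banaszczyk-style worst-case tail bound s·sqrt(n) on the L2 norm of discrete Gaussian vectors against an empirically estimated quantile:

```python
import numpy as np

def empirical_norm_quantile(n, s, trials=10_000, q=0.999):
    # D_{Z^n,s} has density proportional to exp(-pi * ||x||^2 / s^2), i.e. a
    # per-coordinate standard deviation of s / sqrt(2*pi). We approximate it
    # by rounding a continuous Gaussian, which suffices for a rough experiment.
    std = s / np.sqrt(2 * np.pi)
    samples = np.round(np.random.normal(0.0, std, size=(trials, n)))
    return np.quantile(np.linalg.norm(samples, axis=1), q)

n, s = 512, 8.0                       # illustrative values only
bound = s * np.sqrt(n)                # Banaszczyk-style worst-case tail bound
estimate = empirical_norm_quantile(n, s)
print(f"theoretical bound {bound:.1f} vs empirical 99.9%-quantile {estimate:.1f}")
```

Typically the empirical quantile sits well below the worst-case bound; this gap is exactly what such heuristics try to exploit.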
A good starting point to learn about lattices: A Decade of Lattice Cryptography, https://eprint.iacr.org/2015/939.pdf
Supervisor: Laurens Porzenheim
Secure Multiparty Computation (MPC) allows several parties to jointly compute a function while keeping their inputs private. Different frameworks describe the security and reliability of such protocols. Traditional models define adversaries as honest-but-curious or malicious. Universally composable security extends this to settings where many protocols may run in parallel. More recently, rational approaches model participants as self-interested agents who act strategically according to their incentives.
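As a minimal, purely illustrative example of the honest-but-curious setting (the modulus and party count are arbitrary choices for this sketch), here is additive secret sharing in Python, with which three parties can learn the sum of their inputs without revealing the inputs themselves:

```python
import secrets

P = 2**61 - 1  # prime modulus; arbitrary choice for this sketch

def share(x, n):
    # Additive secret sharing over Z_P: n-1 uniformly random shares plus one
    # correcting share, so that all shares sum to x modulo P.
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

# Three parties jointly compute the sum of their inputs: everyone distributes
# shares of their input, each party adds up the shares it received, and only
# the total of these partial sums is revealed. Any proper subset of parties
# sees only uniformly random values.
inputs = [11, 22, 33]
all_shares = [share(x, 3) for x in inputs]
partials = [sum(column) % P for column in zip(*all_shares)]
print(sum(partials) % P)  # 66, while no individual input was disclosed
```

Security here holds only against honest-but-curious parties; the frameworks above differ precisely in how much stronger misbehavior they tolerate.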
This thesis aims to provide a comparison of these approaches by studying how their goals and efficiency differ depending on the considered adversarial behavior and network setting. The work starts with a literature review and then includes practical experiments or implementations to estimate efficiency and scalability. The outcome should give a structured overview of how traditionally secure MPC and rational MPC frameworks relate to each other and which trade-offs appear in practice.
Contact: Henrik Bröcher
Master's Theses
Fuzzy k-Means is a popular generalization of the classical k-Means problem to soft clusterings. From an algorithmic perspective, however, it is much more difficult to compute provably good solutions for it. Typically, these problems use the squared Euclidean distance to measure how far apart data points are. The squared Euclidean distance belongs to the mu-similar Bregman divergences, a large class of dissimilarity measures sharing desirable characteristics.
Using sampling techniques to find good approximations of optimal centroids has proven to work both for the k-Means problem with mu-similar Bregman divergences and for the Fuzzy k-Means problem with the squared Euclidean distance. The goal of this thesis is to explore whether the two can be combined to obtain a good approximation algorithm for the Fuzzy k-Means problem with mu-similar Bregman divergences.
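For concreteness, here is a small Python sketch (the generating function phi(x) = ||x||^2 and the standard membership update with fuzzifier m are assumptions of this illustration) showing that the squared Euclidean distance arises as a Bregman divergence, together with one fuzzy k-Means membership update:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    # Bregman divergence: d_phi(x, y) = phi(x) - phi(y) - <grad_phi(y), x - y>.
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# With phi(x) = ||x||^2 the Bregman divergence is the squared Euclidean distance.
phi = lambda x: np.dot(x, x)
grad_phi = lambda x: 2 * x

def fuzzy_memberships(points, centers, m=2.0):
    # One membership update of fuzzy k-Means with fuzzifier m > 1:
    # u_ij is proportional to d(x_i, c_j)^(-1/(m-1)), normalized over centers.
    d = np.array([[bregman(phi, grad_phi, x, c) for c in centers]
                  for x in points])
    d = np.maximum(d, 1e-12)          # avoid division by zero at a center
    w = d ** (-1.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)

points = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
centers = np.array([[0.5, 0.0], [5.0, 4.5]])
print(fuzzy_memberships(points, centers))  # soft assignments in [0, 1]
```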
Contact: Johannes Blömer
A consequence of the existence of Shor's algorithm is the following statement of quantum advantage [QA]: simulating a full quantum-mechanical experiment cannot be done by a classical computer [CC] in polynomial time, unless integers can also be factored in polynomial time.
However, (1) the impossibility of factoring integers on a CC in polynomial time is not proven but only conjectured in theoretical computer science, and (2) Shor's algorithm requires a robust, fault-tolerant quantum computer [QC], which is beyond the current technological level.
Both issues are sidestepped if QA is instead based on the hardness of computing permanents of arbitrary square complex matrices (a permanent is, roughly speaking, like a determinant but with only + signs), which is known to be a #P-complete problem. Regarding (1), since #P-complete problems are generally considered harder than problems in NP (and in particular than factoring), this gives more solid grounds for QA than the unproven difficulty of factoring. Regarding (2), permanents can be estimated by a boson sampling experiment, a much simpler task than constructing a full QC. However, the approximation of a permanent obtained this way is hardly error-free, and a full proof of QA based on boson sampling (itself already a complicated statement) requires a series of specialized techniques to deal with these errors.
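To make the central object concrete, here is a naive Python implementation of the permanent straight from its definition, i.e. the determinant's Leibniz formula with every sign set to +; this snippet is only for intuition, and its exponential running time is consistent with the #P-hardness mentioned above:

```python
import itertools
import numpy as np

def permanent(a):
    # Leibniz-style definition of the permanent: like the determinant,
    # but every permutation contributes with a + sign. The running time
    # grows as n! * n, matching the expectation that no efficient
    # classical algorithm exists.
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

a = np.array([[1, 2], [3, 4]])
print(permanent(a))  # 1*4 + 2*3 = 10 (the determinant would be -2)
```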
The main task is to give an overview of the worst-to-average-case (in terms of errors) reductions involved. Overviews of related tangents are welcome as well. Optionally, the feasibility of transforming boson sampling experiments into an actual QC may be reviewed.
Contact: Dr. Stanislaw Soltan
The development of a possible Shor's algorithm variant [SAV] with a fixed number of resources has sparked interest in a quantum computation model based on hybrid qubit-quantum oscillator systems. In particular, the energy requirements for a quantum computer [QC] of this construction are of great concern.
An algorithm on a hybrid QC consists of state preparation, a qubit-oscillator circuit, measurements, and classical post-processing. A general analysis of these steps (of which only state preparation is made concrete) provides bounds on the errors, given the number of modes and the maximal allowed energy.
A possible task is to apply the energy-dependent error bounds to the SAV, leading to an estimate of its usefulness in a practical setting. This involves reconciling the differences between the SAV and the constructions used to derive the bounds.
Note that the energy restriction applies to the highest energy contained in a single mode at any point in the algorithm. The task can therefore be split into parts, each analyzing a different stage of the algorithm; the analysis of one chosen part may already prove fruitful enough. As an optional goal, one may investigate whether a more energy-friendly state preparation is possible.
Contact: Dr. Stanislaw Soltan
Garay et al. introduced the Rational Protocol Design (RPD) framework (https://eprint.iacr.org/2013/496.pdf) to study cryptographic protocols when participants act rationally rather than being simply honest or malicious. Their model combines game-theoretic reasoning with simulation-based cryptographic proofs, allowing the analysis of protocols in terms of both security and incentives.
The goal of this thesis is to understand and present this framework in depth. This includes studying the main paper, explaining the underlying concepts and structure of the model, and identifying follow-up research that extends or applies RPD. The work should clarify how the framework relates to traditional security notions and discuss in which situations it provides advantages or has limitations.
Contact: Henrik Bröcher