
Research Group Codes & Cryptography

Open Thesis Topics


Proxy Reencryption for SFC-ABE

Attribute-based encryption (ABE) schemes allow one to implement efficient and fine-grained cryptographic access control to data. In ABE, data is encrypted under a given access structure. Users possess keys according to their rights or attributes, respectively. A user is able to decrypt a ciphertext, and hence to access the corresponding data, if and only if their attributes satisfy the access structure of the ciphertext.

The decryption of ABE ciphertexts is relatively expensive. To make ABE ciphertexts decryptable on resource-constrained devices, Green et al. proposed a proxy re-encryption scheme for ABE [1]. Here, a proxy transforms an ABE ciphertext into a conventional ElGamal ciphertext with the help of an ABE transformation key. The ElGamal ciphertext can then be decrypted efficiently in the standard way. Even with the transformation key, the proxy obtains no information about the encrypted data.
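The transformation step can be illustrated with a much simpler toy: outsourced decryption of plain ElGamal over Z_p*, where the user blinds the secret key and hands the proxy a "transformation key". This is only an analogy to the pairing-based scheme of [1]; all names and parameters below are illustrative and the parameters are far too small for real security.

```python
import math
import random

p = 2 ** 127 - 1          # a Mersenne prime, used as a toy group modulus
g = 3                     # assumed generator (illustrative)

def rand_coprime(n):
    # sample an exponent that is invertible modulo n
    while True:
        z = random.randrange(2, n)
        if math.gcd(z, n) == 1:
            return z

def keygen():
    a = random.randrange(2, p - 1)        # long-term secret key
    return a, pow(g, a, p)                # (sk, pk)

def encrypt(pk, m):
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(pk, k, p)) % p   # (c1, c2)

def make_transformation_key(a):
    # blind the secret key: the proxy gets a/z mod (p-1), the user keeps z
    z = rand_coprime(p - 1)
    tk = (a * pow(z, -1, p - 1)) % (p - 1)
    return tk, z

def proxy_transform(tk, c1):
    # the proxy performs the expensive exponentiation but only ever
    # sees g^(k*a/z), which reveals nothing about the shared secret
    return pow(c1, tk, p)

def user_decrypt(z, t, c2):
    s = pow(t, z, p)                      # recover g^(k*a) = pk^k
    return (c2 * pow(s, -1, p)) % p

a, pk = keygen()
c1, c2 = encrypt(pk, 42)
tk, z = make_transformation_key(a)
t = proxy_transform(tk, c1)
assert user_decrypt(z, t, c2) == 42
```

In [1] the analogous blinding happens in the target group of a pairing, and the transformation key additionally embeds the user's attributes; the cheap final step on the device is the same idea as `user_decrypt` above.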

The thesis deals with a subset of the following aspects of the proxy re-encryption scheme of [1]:

  1. In [1], only an outline of the security proof is given. The task is to complete this outline into a rigorous proof of security.
  2. The re-encryption scheme of [1] is based on the ABE encryption scheme of [2]. The task is to generalize the approach of [1] such that it can be applied to other ABE schemes such as [3].
  3. In [1], the Fujisaki-Okamoto transform is used to achieve CCA security in the random oracle model. The construction of [1] is a proof of concept and uses a one-time pad to encrypt the message. For practical applications, it is preferable to use the scheme in combination with a symmetric block cipher mode in a hybrid encryption scheme. The task is to adapt the construction of [1] accordingly.
  4. The construction of [1] ensures confidentiality of the message even against malicious adversaries. Authenticity is not guaranteed with malicious proxies, but [1] outlines an extension that also provides authenticity. The task is to formally specify the extended scheme and prove its security.
  5. The research group Codes & Cryptography develops a Java framework for pairing-based cryptography. The task is to extend this framework with an implementation of the scheme of [1], preferably including the extensions of Tasks 2, 3, and 4.
  6. It is planned to use the approach of [1] in the collaborative research project “Securing the Financial Cloud (SFC)”. The task is to apply the scheme of [1] to an SFC use case.
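The hybrid-encryption idea from Task 3 can be sketched as a KEM/DEM construction. In the following toy, plain Diffie-Hellman in Z_p* stands in for the ABE key encapsulation of [1], and SHA-256 in counter mode stands in for a real symmetric cipher such as AES-GCM; all names and parameters are illustrative and insecure.

```python
import hashlib
import random

p = 2 ** 127 - 1          # toy prime modulus
g = 3                     # assumed generator (illustrative)

def keystream(key, n):
    # derive n pseudorandom bytes from the key: SHA-256 in counter mode
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def dem_encrypt(key, msg):
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

dem_decrypt = dem_encrypt  # an XOR stream cipher is its own inverse

# KEM: encapsulate a shared secret under the receiver's public key
sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)
r = random.randrange(2, p - 1)
encap, shared = pow(g, r, p), pow(pk, r, p)
key = hashlib.sha256(shared.to_bytes(16, "big")).digest()

# DEM: the derived key encrypts data of arbitrary length, unlike the
# one-time pad of a single group element used in [1]
msg = b"much longer message than one group element"
ct = dem_encrypt(key, msg)

# receiver: recompute the key from the encapsulation and decrypt
key2 = hashlib.sha256(pow(encap, sk, p).to_bytes(16, "big")).digest()
assert key2 == key
assert dem_decrypt(key2, ct) == msg
```

Adapting [1] in this direction means replacing the one-time pad with such a DEM while preserving the CCA security argument of the Fujisaki-Okamoto transform.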

The focus of the thesis will be selected according to the background of the student.

The thesis will be jointly supervised by the research group Codes & Cryptography at Paderborn University and Wincor Nixdorf International GmbH.

Language: English or German.


  • [1] Matthew Green, Susan Hohenberger, and Brent Waters. "Outsourcing the Decryption of ABE Ciphertexts." USENIX Security Symposium, 2011.
  • [2] Brent Waters. "Ciphertext-Policy Attribute-Based Encryption: An Expressive, Efficient, and Provably Secure Realization." Public Key Cryptography (PKC) 2011, Springer, pp. 53-70.
  • [3] Yannis Rouselakis and Brent Waters. "Practical Constructions and New Proof Methods for Large Universe Attribute-Based Encryption." ACM SIGSAC Conference on Computer & Communications Security (CCS) 2013, ACM.


Communication-optimal accumulators based on bilinear maps

A cryptographic accumulator is a constant-size representation of a (large) set S. For every element in S, there is a short witness such that anyone given the accumulator and the witness can verify that the element is indeed included in the original set S. For every element not in S, finding such a witness is computationally intractable (under the usual cryptographic assumptions).

A typical application for accumulators is revocation (e.g., of access rights). When someone is granted certain access rights, their ID is included in some public accumulator. To gain access, they simply compute a witness for their ID in the accumulator and supply it to the verifier. If at some point access needs to be revoked, the responsible party can simply remove the ID from the accumulator, preventing that party from gaining access in the future.

Usually, whenever the accumulator changes to a new set S', every owner of an accumulated ID in S' needs to update their witness so that it fits the accumulator for S' instead of S. A desirable trait of accumulators is being able to update witnesses incrementally instead of having to recompute them for the new set from scratch. Baldimtsi et al. recently published an accumulator with the unique property that if S' is a superset of S (i.e., only additions occurred), no witness updates are necessary at all. They base their construction on the strong RSA assumption.
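The witness-update problem is easiest to see in the textbook strong-RSA accumulator, sketched below with absurdly small parameters and a naive element-to-prime mapping (this is not the scheme of Baldimtsi et al., only the baseline it improves on):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def to_prime(x):
    # map an element to an odd prime (toy mapping, not collision-free)
    cand = 2 * x + 1
    while not is_prime(cand):
        cand += 2
    return cand

N = 61 * 53   # toy RSA modulus; in reality the factorization stays secret
g = 2         # public base

def accumulate(S):
    # acc = g raised to the product of all element primes, mod N
    acc = g
    for x in S:
        acc = pow(acc, to_prime(x), N)
    return acc

def witness(S, x):
    # witness for x: the accumulator of S without x
    return accumulate([y for y in S if y != x])

def verify(acc, x, wit):
    return pow(wit, to_prime(x), N) == acc

S = [10, 20, 30]
acc = accumulate(S)
w10 = witness(S, 10)
assert verify(acc, 10, w10)

# After adding 40, the accumulator changes, and every stored witness must
# be re-exponentiated to stay valid -- exactly the per-addition update
# cost that the Baldimtsi et al. construction eliminates.
acc2 = pow(acc, to_prime(40), N)
w10_new = pow(w10, to_prime(40), N)
assert acc2 == accumulate(S + [40])
assert verify(acc2, 10, w10_new)
```

Because the exponents commute, adding an element is one exponentiation of the accumulator, but so is the update of every single outstanding witness.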

However, in many contexts RSA-based accumulators are inconvenient. Many constructions that could employ accumulators for revocation are based on prime-order groups with bilinear maps, which makes them incompatible with the published accumulator.

The rough idea behind this thesis topic is to first understand the construction of Baldimtsi et al. and to provide detailed proofs for their claims (they only sketch proof ideas in their paper). Then, you should transfer their ideas to the prime-order group setting, making them applicable to a large range of modern cryptographic constructions. For this, the accumulator by Nguyen serves as a base construction for an accumulator that is extendable with methods similar to those of Baldimtsi et al.

Experimental Study of the SEM Algorithm

A typical task in cluster analysis is to describe data by probability distributions. Here, the data is given as a D-dimensional point set X, and the goal is to find a density function p from a certain family of density functions that best describes the point set. How well the distribution p describes the set X can be measured by the product of the densities of the individual points, i.e., \prod_{x \in X} p(x). This problem is also known as the maximum likelihood estimation (MLE) problem.

The Expectation Maximization (EM) algorithm [1] is a standard method for solving this problem. It takes an initial density function as input, which it then tries to improve step by step. The so-called Stochastic EM (SEM) algorithm is a randomized variant of the EM algorithm that replaces some computations of the (deterministic) EM algorithm by a random experiment intended to imitate them. The advantage is an improved running time compared to the EM algorithm. The question is how well the SEM algorithm actually imitates the EM algorithm.

A typical family of distributions for which the MLE problem is considered are mixtures of Gaussian distributions. How well the SEM algorithm imitates the EM algorithm in this case has already been studied (experimentally and theoretically) in [2]. Within this Bachelor's thesis, the SEM and EM algorithms are to be compared for further, simple distributions (e.g., the exponential distribution).
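Such a comparison can be prototyped directly. The following minimal sketch implements one EM step and one SEM step for a mixture of two exponential distributions, p(x) = w1·l1·e^(-l1·x) + w2·l2·e^(-l2·x); the parametrisation, initial values, and data are illustrative assumptions, not taken from [2].

```python
import math
import random

def pdf(lam, x):
    # density of the exponential distribution with rate lam
    return lam * math.exp(-lam * x)

def responsibilities(X, w, lam):
    # E-step: posterior probability that each point came from component 0
    return [w[0] * pdf(lam[0], x) /
            (w[0] * pdf(lam[0], x) + w[1] * pdf(lam[1], x))
            for x in X]

def m_step(X, r):
    # M-step: weighted maximum-likelihood estimates of weights and rates
    n0 = sum(r)
    n1 = len(X) - n0
    lam0 = n0 / sum(ri * x for ri, x in zip(r, X))
    lam1 = n1 / sum((1 - ri) * x for ri, x in zip(r, X))
    return [n0 / len(X), n1 / len(X)], [lam0, lam1]

def em_step(X, w, lam):
    return m_step(X, responsibilities(X, w, lam))

def sem_step(X, w, lam):
    # stochastic E-step: sample a hard 0/1 assignment per point instead
    # of using the soft responsibilities
    r = responsibilities(X, w, lam)
    z = [1.0 if random.random() < ri else 0.0 for ri in r]
    return m_step(X, z)

random.seed(1)
X = [random.expovariate(0.5) for _ in range(500)] + \
    [random.expovariate(5.0) for _ in range(500)]

w, lam = [0.5, 0.5], [0.3, 3.0]            # illustrative initial model
for _ in range(50):
    w, lam = em_step(X, w, lam)
print(sorted(lam))   # the estimated rates; true rates are 0.5 and 5.0

w_s, lam_s = sem_step(X, [0.5, 0.5], [0.3, 3.0])   # one stochastic step
```

Running the same loop with `sem_step` instead of `em_step`, over many random seeds, gives exactly the kind of experimental comparison the thesis asks for.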

[1] Christopher M. Bishop: Pattern Recognition and Machine Learning (Information Science and Statistics).

[2] Johannes Blömer, Kathrin Bujna, Daniel Kuntze: A Theoretical and Experimental Comparison of the EM and SEM Algorithm. In Proceedings of the 22nd International Conference on Pattern Recognition (ICPR) 2014, Stockholm, Sweden, pp. 1419-1424, IEEE, 2014.
