Oberseminar Codes und Kryptographie

Dates

The Oberseminar currently takes place only irregularly; sessions are announced on this website or via our mailing list.

Talks SS 2021 and WS 2021-22

We look at the different definitions of simulation-extractability and give a rough "what is really used in papers" overview.

We present the papers On Signatures of Knowledge by Chase and Lysyanskaya and Relaxed Security Notions for Signatures of Knowledge by Fischlin and Onete.
We introduce the definitions and compare the results given in both papers.

We present recent results regarding lattice-based reputation systems.

We summarize progress and give an overview of zero-knowledge proofs of knowledge over different groups.

This talk presents the paper Anonymous Reputation Systems Achieving Full Dynamicity from Lattices by Kaafarani, Katsumata, and Solomon.

We present the reputation systems and security definitions by Jakob Juhnke.
In the first part of the talk we present an overview of reputation systems based on the systematization-of-knowledge paper SoK: Privacy-Preserving Reputation Systems by Gurtler and Goldberg.

We present the current state of the security architecture in the SFB (CRC) 901 T2 project.

We outline how secret keys are stored in the servers' hardware security modules and discuss how the incentive system was adapted to accommodate this.

Talks WS 2020-21

The goal of secure multiparty computation (MPC) is the joint evaluation of, possibly randomized, functions based on private inputs without disclosing anything beyond correct results to the corresponding parties. Protocols for this task have been designed under various assumptions and adversarial models, for example passive, active, and covert adversaries. Generally, such protocols either tackle a specific problem or can be used to jointly compute arbitrary functions.
In this talk we focus on techniques and developments in the SPDZ (pronounced: speedz) protocol family, which targets the evaluation of arithmetic circuits in an environment where all but one party might be actively corrupted by a computationally bounded adversary. In a nutshell, SPDZ is split into an input-independent preprocessing ("offline") phase, where secret-shared correlated randomness is jointly generated, and an online phase, where computation is performed on secret-shared data which is authenticated using information-theoretically secure MACs. Before outputs are revealed, these MACs are verified using an efficient batch check preventing any attack beyond aborting or changing one's own inputs. Both the offline and the online phase have been improved over time, resulting in successors like SPDZ-2, MASCOT, and Overdrive. Compared to other protocols in this setting, these are rather efficient and have been used in practice.
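To illustrate the online phase, here is a minimal Python sketch (an illustration under simplifying assumptions, not the actual SPDZ implementation: a trusted dealer creates all shares, and the commit-then-open step of the real MAC check is omitted) of MAC-authenticated additive secret sharing and the check performed when a value is opened:

```python
import random

P = 2**61 - 1  # a prime; SPDZ works over a finite field (illustrative choice)

def share(value, n):
    """Split value into n additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# The global MAC key alpha is itself additively shared among the parties;
# in the real protocol nobody ever learns alpha in the clear.
n = 3
alpha_shares = share(random.randrange(P), n)
alpha = sum(alpha_shares) % P  # dealer-side view, for this sketch only

def authenticate(value):
    """Authenticated sharing [value]: shares of value and of alpha*value."""
    return share(value, n), share(alpha * value % P, n)

# Linear operations are local: shares of x+y are x_i+y_i, MAC shares add too.
x_sh, x_mac = authenticate(7)
y_sh, y_mac = authenticate(35)
z_sh = [(a + b) % P for a, b in zip(x_sh, y_sh)]
z_mac = [(a + b) % P for a, b in zip(x_mac, y_mac)]

# Opening: parties broadcast their shares of z, then check the MAC without
# revealing alpha: each party publishes sigma_i = mac_i - alpha_i * z
# (after committing to it, in the real protocol); the sigma_i must sum to 0.
z = sum(z_sh) % P
sigma = [(m - a * z) % P for m, a in zip(z_mac, alpha_shares)]
assert sum(sigma) % P == 0 and z == 42
```

Linear gates thus stay entirely local, which is why the online phase is cheap; multiplications additionally consume correlated randomness (Beaver triples) from the offline phase.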

A Computer Assisted Proof (CAP) is a proof of a claim which was, at least partially, generated by a computer.
While CAPs have been employed in various ways in the last decades, there is still some scepticism towards their validity.
In this talk we take a short look at philosophical criticism of CAPs and then discuss when to apply CAPs, how to apply them, and the nature of typical pitfalls.
Specifically, we will look at three different propositions which have been proven by a CAP.

In lattice-based cryptography, the hardness of the LWE problem is a widely used assumption. However, LWE can be inefficient to use in practice. To help with this issue, several works introduced many algebraically structured versions of LWE, such as Ring-LWE. In this talk we aim to present the different variants of LWE and to show the relations between them.
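For orientation, the plain LWE distribution and its ring variant can be stated as follows (standard formulations, with \chi a narrow error distribution):

\[ \text{LWE:}\quad (A,\; b = A s + e \bmod q), \qquad A \xleftarrow{\$} \mathbb{Z}_q^{m\times n},\; s \xleftarrow{\$} \mathbb{Z}_q^{n},\; e \leftarrow \chi^{m}, \]
\[ \text{Ring-LWE:}\quad (a,\; b = a\cdot s + e) \in R_q \times R_q, \qquad R_q = \mathbb{Z}_q[X]/(X^{n}+1). \]

In both cases the task is to distinguish such pairs from uniform ones, or to recover s.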

Abstract for normal people: We’re taking a look at Bulletproofs, which ultimately (among many other things) give us a way to prove to any distrusting verifier that we know a satisfying assignment to a SAT formula without revealing the solution (and even more efficiently than sending the solution to the verifier in plain).

Abstract for people who do crypto all day: Are you tired of your Schnorr proofs being too big? You’re skeptical whether SNARKs really exist and only believe that DLOG is hard? Did you Pedersen-commit to your data and don’t want to have to decommit within an algebraic circuit? You don’t want trusted setup? Then come and learn about Bulletproofs!
Bulletproofs live in the “proof of knowledge of exponents” Sigma protocol universe. At their core lies a clever yet simple method to compress the preimage of a homomorphism. This method can be applied (1) to compress Schnorr-style proofs and (2) to get short zero-knowledge inner-product proofs. From the latter, one can construct efficient range proofs and somewhat efficient proofs for algebraic circuits.
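To give a flavor of the compression method, here is one folding round of the inner-product argument in simplified notation (see the Bulletproofs paper for the precise protocol): given a commitment P = \vec g^{\vec a} \vec h^{\vec b} u^{\langle \vec a, \vec b\rangle} to vectors \vec a, \vec b of length n, the prover sends two group elements L, R, receives a challenge x, and both parties fold the instance to length n/2:

\[ \vec a' = x\,\vec a_{\mathrm{lo}} + x^{-1}\,\vec a_{\mathrm{hi}}, \qquad \vec b' = x^{-1}\,\vec b_{\mathrm{lo}} + x\,\vec b_{\mathrm{hi}}, \qquad P' = L^{x^{2}}\, P\, R^{x^{-2}}. \]

After \log_2 n such rounds only two scalars remain, which is where the logarithmic proof size comes from.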

Structure-preserving signatures on equivalence classes (SPS-EQ, introduced by Hanser and Slamanig, ASIACRYPT'14) are tremendously useful in many areas of cryptography. Here, not just a single message is signed, but all multiples of this message (called its equivalence class). In the last 6 years they have been used to build the most efficient schemes for attribute-based credentials, delegatable credentials, blind signatures, self-blindable certificates, group signatures, sanitizable signatures, verifiable encrypted signatures, and incentive systems. That is a bunch of stuff, and the reason is that SPS-EQ, with their randomizability of signatures and messages within an equivalence class, offer the same features as combining a normal SPS scheme with non-interactive zero-knowledge proofs (NIZKs, like the Groth-Sahai proof system). Basically, you get rid of NIZK proofs by using SPS-EQ.
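Concretely, for message vectors M \in (\mathbb{G}_1^*)^{\ell} the equivalence relation is (standard definition):

\[ [M]_{\mathcal{R}} = \{\, \mu \cdot M \;:\; \mu \in \mathbb{Z}_p^{*} \,\}, \]

and given a signature on M, anyone can publicly adapt it to a freshly randomized signature on any representative \mu \cdot M of the same class.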
 
One downside of this direct approach is that, in many cases where anonymity is concerned, you only get selfless anonymity, i.e. no anonymity for users whose secret key was leaked. Hence, in delegatable credentials from SPS-EQ the issuer can recognize credentials that he delegated.
 
In this talk we present the recent paper "Efficient Signatures on Randomizable Ciphertexts" by Balthazar Bauer and Georg Fuchsbauer that tackles this problem by introducing a new equivalence class on the messages to be signed. This presentation highlights the interesting combination of the SPS-EQ idea and an encryption scheme.

The leftover hash lemma is a classical technique in cryptography and complexity. It has been used, for example, to construct pseudorandom generators and randomness extractors. In recent years the leftover hash lemma has also been used to prove the security of lattice-based cryptographic primitives. In the talk I explain and prove the leftover hash lemma. Then I show its use in lattice-based cryptography by proving the security of two lattice-based encryption schemes, introduced by Regev and by Gentry, Peikert, and Vaikuntanathan, respectively.
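One common form of the lemma, included here for orientation (standard statement): if \mathcal{H} = \{ h : X \to Y \} is a universal hash family and W is a random variable over X with min-entropy H_\infty(W) \ge k, then

\[ \Delta\big( (H, H(W)),\; (H, U_Y) \big) \;\le\; \tfrac{1}{2}\,\sqrt{|Y| \cdot 2^{-k}}, \]

where H is uniform over \mathcal{H}, U_Y is uniform over Y, and \Delta denotes statistical distance.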

Talks WS 2019-20

Universally composable protocols allow a cryptographer to prove a specific cryptographic task (e.g. signatures) secure once and to arbitrarily combine it with other secure protocols. Defining a framework that supports this is difficult and very delicate. Hence, there are many composability frameworks (some more general than others) that are still under active development (one of them, UC, ten years after its initial publication). Camenisch, Krenn, Kuesters, and Rausch recently introduced the universal composability framework iUC at ASIACRYPT 2019. iUC is a generalisation of another composability framework, the IITM model, but also incorporates important lessons learned from using other existing composability frameworks such as UC, SUC, and GNUC.

We highlight how iUC simplifies and unifies the design of protocols with elegant modelling decisions and templates. Further, we explain the major differences compared to working with UC.

Adversarial samples are specially crafted so that a machine learning system recognizes the sample as belonging to some class, while a human would classify the sample as belonging to another class. We explore the mathematical background of adversarial samples, coming to the conclusion that adversarial samples are not a bug, but a feature of many machine learning systems.

The talk is based on a preprint by Shamir et al.: "A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance," arXiv report 1901.10861.

In secure multiparty computation (MPC), there are n parties P_i. Each party P_i holds a private input x_i. They then run an interactive protocol. The goal of the parties is to compute f(x_1, ..., x_n) for a public function f, without having to reveal any of their inputs to one another.

More specifically, each party P_i learns only y = f(x_1, ..., x_n) (and its own input x_i), but does not learn anything about x_1, ..., x_(i-1), x_(i+1), ..., x_n (except what can be inferred from y). Additionally, no party should be able to change the result y other than by choosing its own input.

In a previous talk we have seen general MPC protocols for arbitrary computable functions f. Due to their computational overhead, these protocols are impractical for real-world applications.

In the talk, we will present concrete approaches used to make MPC affordable in practice. For example, we will cover techniques improving the efficiency of garbled circuits. Further, we introduce a concept for more efficiently realizing MPC protocols for functions with RAM-like accesses to their inputs.
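For illustration, here is a toy garbled AND gate in Python (a simplified sketch of the standard technique, not one of the optimizations from the talk; real protocols deliver the evaluator's input labels via oblivious transfer and use tricks such as point-and-permute):

```python
import hashlib, os, random

KEYLEN = 16
TAG = b"\x00" * KEYLEN  # redundancy so the evaluator recognizes the right row

def enc(k_a, k_b, payload):
    """One-time pad derived from the two input wire labels."""
    pad = hashlib.sha256(k_a + k_b).digest()  # 32 bytes
    return bytes(x ^ y for x, y in zip(pad, payload))

dec = enc  # an XOR pad is its own inverse

# one random label per wire and truth value
wire = {w: (os.urandom(KEYLEN), os.urandom(KEYLEN)) for w in ("a", "b", "out")}

# Garbler: for each input combination, encrypt the matching output label
# (plus a recognizable tag) under the two corresponding input labels.
table = [enc(wire["a"][va], wire["b"][vb], wire["out"][va & vb] + TAG)
         for va in (0, 1) for vb in (0, 1)]
random.shuffle(table)  # hide which row belongs to which inputs

def evaluate(table, label_a, label_b):
    """Evaluator holds one label per input wire and learns one output label;
    wrong rows decrypt to garbage that (w.h.p.) lacks the tag."""
    for row in table:
        payload = dec(label_a, label_b, row)
        if payload.endswith(TAG):
            return payload[:KEYLEN]

# With the labels for a=1 and b=1 the evaluator learns exactly wire["out"][1].
assert evaluate(table, wire["a"][1], wire["b"][1]) == wire["out"][1]
```

In a full circuit the recovered output label would simply feed into the next gate's table, so the evaluator never learns any intermediate truth value.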

In secure multiparty computation (MPC), there are n parties P_i. Each party P_i holds a private input x_i. They then run an interactive protocol. The goal of the parties is to compute f(x_1, ..., x_n) for a public function f, without having to reveal any of their inputs to one another. 
More specifically, each party P_i learns only y = f(x_1, ..., x_n) (and its own input x_i), but does not learn anything about x_1, ..., x_(i-1), x_(i+1), ..., x_n (except what can be inferred from y). Additionally, no party should be able to change the result y other than by choosing its own input.

Under mild complexity assumptions, MPC protocols exist for arbitrary computations f (though those protocols often come with prohibitive computational overhead). 

In the talk, we will look at simple constructions and general techniques for MPC. 

There are numerous subclasses of TFNP which have gained increasing attention in recent years. From a cryptographic perspective, PPP has turned out to be one of the most interesting ones. In this talk, we provide a rough snapshot of the current state of the TFNP world and discuss advancements in the classification of cryptographic problems in this context.

When delegating computations to an untrusted cloud, one wants to have guarantees about the result's correctness. One way to achieve this is homomorphic signatures, which allow computing a signature for the result alongside the computation on the authenticated data.

We present an extension to homomorphic signatures that allows homomorphic computation on data items authenticated by different users.

Talks WS 2018-19

Sets and dictionaries are data structures commonly found in applications. In this talk, we will discuss authenticated sets and dictionaries that can be used if the data structures need to provide cryptographic guarantees. In particular, the data structures we present can be stored on untrusted servers. Yet, they guarantee that the server cannot remove elements from the data structures.
In contrast to common authenticated data structures, the data structures we present do not require the source of the data to be trusted.
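For contrast, the classical approach with a trusted source is a Merkle hash tree whose signed root authenticates logarithmic-size membership proofs; a minimal sketch (standard construction, not one of the schemes from the talk):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    if len(level) % 2:            # duplicate the last node on odd levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)], level

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level, _ = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root, with the node's left/right position."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        next_level, padded = _next_level(level)
        proof.append((padded[index ^ 1], index % 2))  # (sibling, is_right_child)
        level, index = next_level, index // 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

data = [b"alice", b"bob", b"carol"]
root = merkle_root(data)                    # the source signs this root
assert verify(root, b"bob", merkle_proof(data, 1))
```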

Anonymous credentials allow users to prove possession of access rights without revealing their identity. In this talk, we'll cover some recent results for anonymous credentials and applications. In particular, we'll look at the new notion of updatable credentials (unpublished work by Blömer, Bobolz, Eidens).

Updatable credentials are parameterized by attributes. The credential's issuer can apply an update to these attributes without learning anything about the original attributes (hence preserving the credential holder's anonymity). While interesting on its own, we use this property to construct a privacy-preserving incentive system.

Incentive systems allow customers to incrementally gather points to spend on bonus items (in Germany, this was popularized by "Payback").
In our construction, users store their points in the form of a credential with a "points" attribute. To issue k additional points, the provider can run a "+k" update on this attribute without learning anything about the user's identity or their point count. The major challenge for such systems is double-spending prevention, which will be explained in the talk.

The fuzzy K-means problem is a popular generalization of the well-known K-means problem to soft clusterings. In this talk, we present the first coresets for fuzzy K-means with size linear in the dimension, polynomial in the number of clusters, and poly-logarithmic in the number of points. We further discuss applications of these coresets: the computation of a (1+\epsilon)-approximation for fuzzy K-means and how to maintain a coreset in an insertion-only streaming setting, where data points arrive one-by-one.
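The guarantee behind the term "coreset" can be stated as follows (standard \epsilon-coreset definition, recalled here for orientation): a weighted point set S is an \epsilon-coreset of the input P if, for every candidate solution C,

\[ (1-\epsilon)\,\mathrm{cost}_P(C) \;\le\; \mathrm{cost}_S(C) \;\le\; (1+\epsilon)\,\mathrm{cost}_P(C). \]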

This talk is a long version of the one I will give at ISAAC 2018.

Talks SS 2018

We present the Gorbunov, Vaikuntanathan, and Wichs construction of fully homomorphic signatures. The construction supports arithmetic circuits of arbitrary but fixed depth. The signatures are built from a new primitive called homomorphic trapdoor functions. We discuss the connection between homomorphic signatures and homomorphic trapdoor functions. Finally, we show how to construct secure homomorphic trapdoor functions using lattices, assuming the hardness of the SIS (Small Integer Solution) problem.

In Attribute-Based Signatures an authority can generate signing keys related to a set of attributes. These keys can be used to sign messages if and only if the related attributes satisfy a pre-defined policy. Anybody can verify the signature using public parameters to become convinced that the signer's attributes satisfied the policy, without knowing the attributes. The security requirements are unforgeability of signatures and key privacy (signatures should not expose the specific signing key and attributes used).

In homomorphic signatures, a user signs some large data x using the secret signing key and stores the signed data on a server (the data x and the signature s). The server can then run some computation y = g(x) on the signed data and homomorphically produce a short signature s'. Anybody can verify the signature s' using the public verification key and become convinced that y is the correct output of the computation g over the user's data x, without needing to have the underlying data itself. A secure homomorphic signature scheme must be unforgeable and context hiding (a signature which certifies y as the output of some computation g over the user's data should not reveal anything about the underlying data beyond the output of the computation).

Recently, Rotem Tsabary presented an equivalence between attribute-based signatures and homomorphic signatures (TCC 2017). Unfortunately, Tsabary uses weaker security notions than the standard ones to prove the equivalence. In this talk we discuss these results and point out some subtleties in the construction which must be fixed to use attribute-based signatures and homomorphic signatures interchangeably.

When computing on authenticated data it can be useful to make results available to an audience that is unaware of the source data, and should not learn anything about the source data beyond what is revealed by the result. However, receivers of the result should be able to verify the result.

Homomorphic signatures allow the derivation of signatures on the result from signatures on the inputs. Receivers of the result can then check the signature on the result and be assured that the result was computed from authenticated data originating from a given source.

In my talk, I will present relevant security notions and several simple, yet useful homomorphic signature schemes.

We investigate the use of trusted execution environments (TEEs, such as Intel's SGX) for an anonymous communication infrastructure over untrusted networks. For this, we present the general idea of exploiting trusted execution environments for the purpose of anonymous communication, including a continuous-time security framework that models strong anonymity guarantees in the presence of an adversary that observes all network traffic and can adaptively corrupt a constant fraction of participating nodes. In our framework, a participating node can generate a number of unlinkable pseudonyms. Messages are sent from and to pseudonyms, allowing both senders and receivers of messages to remain anonymous. We introduce a concrete construction, which shows the viability of our TEE-based approach to anonymous communication. The construction draws from techniques from cryptography and overlay networks. Our techniques are very general and can be used as a basis for future constructions with similar goals.

In this talk, we outline our construction and the main technical ideas used in our work. We supplement this with an overview of Intel SGX and describe how we implemented a prototype of our anonymous communication infrastructure using SGX.

Our goal is to get an efficient protocol for anonymous identification that also supports an interesting class of statements. The ultimate goal for the so-called proof systems used in anonymous identification is to handle any NP relation R, where R(x, w) = true for a statement x and witness w. There is a zoo of proof systems: zero-knowledge proofs of knowledge, witness-indistinguishable proofs, non-interactive proofs, SNARKs, subversion-resistant proofs, ZAPs, and so on.

ZAPs are non-interactive witness-indistinguishable proofs without any setup. This is a big step towards practical applications, since often-used proof systems like Groth-Sahai proofs (GS proofs) are only secure under a trusted setup.

We present the recent work by Fuchsbauer and Orrù, "Non-interactive zaps of knowledge", and use it as a starting point to give an overview of existing proof systems, their features, and their varieties. Fuchsbauer and Orrù show the first ZAP that satisfies the strong soundness definition called "argument of knowledge". An argument of knowledge proves not only that the statement is true, but also that the prover knows a witness for it.

The classical K-means problem is solved by minimizing a sum-of-squares error term. This objective function does not encompass any structural information about the solution. It is easy to formulate criteria such that solutions which are structurally similar to an optimal solution have an almost optimal objective value. However, the reverse direction is not clear. In this talk we have a look at a short line of work by Marina Meilă in which she formulates sufficient conditions such that a solution with small enough cost is structurally similar to an optimal solution. These conditions are useful in practice, since they can be evaluated using only the data set and a single solution, without any additional assumptions.
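For reference, the sum-of-squares objective in question is

\[ \mathrm{cost}(C) \;=\; \sum_{x \in X} \min_{c \in C} \lVert x - c \rVert^{2}, \qquad |C| = k, \]

minimized over all sets C of k centers.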

Talks WS 2017-18

I present the recent result that RSA moduli generated by a popular crypto library are insecure. The attack on these RSA moduli is based on 20-year-old results by Coppersmith and others. In the talk I will use Coppersmith's method as a black box and will concentrate on the basics of the attack and the techniques needed to achieve a practical attack.

Sharing untrusted memory among multiple users is rather common. We discuss some security guarantees that can be achieved on such memory, particularly, the notion of fork-linearizability. Fork-linearizability does not prevent dishonest behaviour of untrusted memory, but makes it easily observable. The notion of fork-linearizability can be achieved through various means, for example blockchains, and can be applied to a lot of application scenarios, like an authenticated SVN repository or allowing multiple data sources to jointly maintain an authenticated data structure in a generic way.

In this talk we are going to explore a semi-arbitrary collection of useful algebraic tricks for the construction of cryptographic schemes (in particular, but not limited to, signature schemes). 

For this, we are going to take a look at three topics:
(1) Waters signatures/IBE and a programmable hash function
(2) Pointcheval-Sanders signatures based on a new non-interactive assumption (improvement over the original paper)
(3) Signatures on vector spaces and their applications to network coding 

These schemes have in common that their proofs provide the "ah, okay, this term just vanishes" effect. We are going to try to highlight what algebraic design decisions lead to those convenient "coincidences"; hopefully supplying you with a small toolbox of tricks for your future algebraic crypto constructions.
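As one example of such a "vanishing term", recall the basic single-message Pointcheval-Sanders scheme: the secret key is (x, y), the public key is (\tilde X, \tilde Y) = (\tilde g^{x}, \tilde g^{y}), and a signature on m is

\[ \sigma = \big(h,\; h^{x + y m}\big) \quad \text{for random } h \in \mathbb{G}_1^{*}, \qquad \text{verified via } e\big(h, \tilde X \tilde Y^{m}\big) = e\big(h^{x+ym}, \tilde g\big). \]

Both sides expand to e(h, \tilde g)^{x+ym}, which is exactly the kind of algebraic cancellation meant above; randomizing (h, \sigma_2) to (h^{r}, \sigma_2^{r}) preserves the identity.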

Signatures of Knowledge (SoK) are used in many cryptographic systems like reputation systems, group signatures, and ring signatures. We know how to build them using the so-called Fiat-Shamir transformation in the random oracle model. There is also a UC-secure construction in the standard model utilising public-key encryption. In the talk we will present the work by Groth and Maller. They introduce a construction of SoKs from simulation-extractable succinct non-interactive arguments of knowledge (SE-SNARKs). Their construction yields a SoK consisting of only 3 group elements, and they prove that this is optimal for pairing-based SoKs.

We will concentrate on the construction of the SoK given a SE-SNARK and highlight their relation. A part of this talk will be the introduction of simulation-extractability.

Recently, Derler and Slamanig introduced Key-Homomorphic Signatures. Such signatures are a generalization of larger classes of existing signature schemes and can be used to build other interesting primitives (ring signatures, multisignatures, simulation-sound extractable arguments of knowledge, and others) in a simple and elegant way. In this talk we will present different formal definitions of Key-Homomorphic Signatures and discuss the application to arguments of knowledge in detail.

There is a vast number of theoretical results that seem to indicate that clustering is extremely difficult. For instance, we know that the k-means problem is APX-hard and there is even an "Impossibility Theorem for Clustering". However, "clustering" cannot be defined precisely; it is in the eye of the beholder, which is basically why there are so many different clustering algorithms, clustering "problems", and no general clustering theory. Besides that, in practice we observe that simple heuristics (often) find meaningful results.

For instance, the problem of finding a clustering that minimizes the k-means cost is a meaningful clustering problem if and only if it is worth minimizing the k-means cost. In other words, in order to define a practical k-means clustering problem, we have to incorporate some bias, i.e. domain knowledge. This aspect is not taken into account in the traditional worst-case analysis. It is still an open problem how to describe meaningful k-means instances and how fast the k-means problem can be solved with respect to these instances. 

There is a growing amount of literature that deals with this idea and shows that there are provably accurate and efficient algorithms for clustering problems that, according to traditional worst-case analysis, should be difficult.  In this talk, I'll give an overview of some of this literature. 

Solving problems on graphs is at the heart of theoretical computer science (admittedly, this is an overly dramatic statement, but you get the drift ;)). However, a whole lot of interesting and practically relevant graph problems turn out to be NP-complete. Naturally, one might ask the question: can we restrict the input space to allow for efficient algorithms?
In parameterized complexity theory, we detach certain parts of the input (the parameter) from the size of the given instance. The goal is to find efficient algorithms, assuming the parameter is some small constant.
In this talk, we discuss different, clearly quantifiable parameterizations of well-known graph problems. We further show negative and positive results for these parameterizations in the context of parameterized complexity. If time permits, we discuss applications of these techniques to certain graph variants of clustering problems.

Talks SS 2017

Companies gathering huge amounts of information sometimes release small snapshots of their data, either to promote research or to pose some sort of "challenge" with a monetary reward.
Time and time again, such companies have impressed us with their inability to anonymize these datasets.
But are we actually able to make sure that a dataset is anonymized?
Or rather, how do we determine "how much" anonymization is necessary?

In her seminal work, Cynthia Dwork defined the field of Differential Privacy, with the goal of formalizing the risk of including sensitive information in publicly available datasets. In this talk, we explore the formal fundamentals of Differential Privacy and present basic tools for (quantifiably) anonymizing datasets.
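As a taste of the "quantifiable" part, here is the classical Laplace mechanism for an \epsilon-differentially-private counting query, in a minimal Python sketch (standard mechanism; the dataset and parameters are made up for illustration):

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=np.random.default_rng()):
    """epsilon-DP count query: a counting query has sensitivity 1
    (adding or removing one record changes the count by at most 1),
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for record in data if predicate(record))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 45, 52, 29, 61]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"noisy count of records with age >= 40: {noisy:.2f}")
```

A smaller epsilon means more noise and stronger privacy; the definition then guarantees that the output distribution changes by at most a factor e^\epsilon when any single record is added or removed.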

The Generic Group Model (GGM) is often used to analyze new cryptographic hardness assumptions and/or primitives. In the GGM, algorithms are only allowed to evaluate group operations using designated oracles; adversaries cannot make use of the representation of the groups used in practice. To overcome this limitation, Fuchsbauer, Kiltz, and Loss recently proposed the Algebraic Group Model (AGM). In this model, an adversary A is modeled as "algebraic": whenever A outputs some group element Z, it also has to output a representation of Z with respect to all group elements given to A so far. Using algebraic algorithms, they prove the equivalence of the Discrete Logarithm Problem to the Diffie-Hellman Problem and related problems. That means every algorithm against Diffie-Hellman-related problems has to solve the Discrete Logarithm Problem directly, unless it relies on inherently non-algebraic operations.
In this talk we will give a short introduction to the AGM, and we will show how to analyze new hardness assumptions in this model.
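Formally, an algebraic adversary that outputs a group element Z after having received group elements g_1, \dots, g_n must also output exponents z_1, \dots, z_n explaining it:

\[ Z \;=\; \prod_{i=1}^{n} g_i^{z_i}. \]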

Attribute-based signature (ABS) schemes and a first experiment-based security definition were introduced by Maji, Prabhakaran and Rosulek [1]. The concept of attribute-based signatures considers several signers and an authority that issues secret keys to them. The secret keys encode a set of attributes. Attribute-based signatures are generated on a message and a policy, for example a Boolean formula over the attributes. To generate a valid signature a signer has to possess a secret key where the encoded attributes satisfy the given policy. A receiver of an attribute-based signature is able to verify whether it was generated by a signer that has satisfying attributes for the given policy. The validity of a signature is therefore not bound to a single signer's identity but rather to a group of signers with satisfying attributes.

A secure ABS is defined as unforgeable and perfectly private. Unforgeability captures that a valid signature on a message and policy should only be generated by a single signer whose secret key encodes attributes that satisfy the policy, and not by colluding signers. Perfect privacy captures that a signature is independent of the secret key used to generate it. This means that a signature itself tells a verifier nothing about the identity of the signer or the concrete attributes used to generate the signature.

The generic construction presented in [1] is based on digital signatures, non-interactive witness-indistinguishable proofs, and monotone span programs. We will present a simplified version of the construction with respect to a stronger security model.

[1] Maji, H.K., Prabhakaran, M., Rosulek, M.: Attribute-based signatures. CT-RSA 2011. LNCS, vol. 6558. Springer (2011)

For NP-languages, we know that there are polynomial-size witnesses for every word that allow a polynomial-time verifier to convince himself of language membership. The question we are concerned with is whether we can make witnesses shorter.

Succinct non-interactive arguments (SNARGs) promise witnesses of sublinear length. For SNARGs, we weaken the strong notion of a proof (there are no proofs for false statements) to arguments (there may exist arguments for false statements, but they are hard to find for any polynomial-time bounded adversary). There are several constructions of SNARGs for (probably) hard languages, but none of them are proven secure under any of the standard hardness assumptions in cryptography. 

The lack of such constructions is explained by Gentry and Wichs [1]. They showed that no SNARG can be proven secure with a black-box reduction to any falsifiable assumption. Effectively, this rules out SNARGs under standard assumptions with non-revolutionary security proof techniques. In the talk, we are going to present their proof and discuss implications.

[1] Gentry, Craig, and Daniel Wichs. "Separating succinct non-interactive arguments from all falsifiable assumptions." Proceedings of the forty-third annual ACM symposium on Theory of computing. ACM, 2011.

Bloom filters have found widespread use for answering set (non-)membership queries in a variety of application types. Examples include spell checking, cache management for network caches, and recognizing malicious URLs. Due to the use of Bloom filters in security-critical applications, the question arises how Bloom filters affect the applications' security. In the talk, we will present Bloom filters and potential attacks on them.
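For readers who have not met the data structure, here is a minimal Python Bloom filter (standard construction; the parameters m and k are illustrative and would normally be derived from the desired false-positive rate):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions into a bit array of size m.
    Membership queries may yield false positives, never false negatives."""

    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):  # derive k hash functions from one via a salt
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

urls = BloomFilter(m=10_000, k=7)
urls.add(b"http://malicious.example")
assert b"http://malicious.example" in urls   # added items are always found
print(b"http://honest.example" in urls)      # False with high probability
```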

Talks SS 2016 - WS 2017

We introduce a new, simple and non-interactive complexity assumption for cryptographic hash functions, which seems very reasonable for standard functions like SHA-3. We describe how this assumption can be leveraged to obtain standard-model constructions that previously seemed to require a programmable random oracle: a generic construction of identity-based key encapsulation (ID-KEM) with full adaptive security from a scheme with very weak security (“selective and non-adaptive chosen-ID security”), a similar generic construction for digital signatures, and the first constructions of ID-KEMs and signatures over bilinear groups, where a ciphertext or signature consists of only a single group element and which achieve full adaptive security without random oracles.

Reducing latency overhead while maintaining critical security guarantees like forward secrecy has become a major design goal for key exchange (KE) protocols, both in academia and industry. Of particular interest in this regard are 0-RTT protocols, a class of KE protocols which allow a client to send cryptographically protected payload in zero round-trip time (0-RTT) along with the very first KE protocol message, thereby minimizing latency. Prominent examples are Google's QUIC protocol and the upcoming TLS protocol version 1.3.

Intrinsically, the main challenge in a 0-RTT key exchange is to achieve forward secrecy and security against replay attacks for the very first payload message sent in the protocol. According to cryptographic folklore, it is impossible to achieve forward secrecy for this message, because the session key used to protect it must depend on a non-ephemeral secret of the receiver. If this secret is later leaked to an attacker, it should intuitively be possible for the attacker to compute the session key by performing the same computations as the receiver in the actual session.

In this paper we show that this belief is actually false. We construct the first 0-RTT key exchange protocol which provides full forward secrecy for all transmitted payload messages and is automatically resilient to replay attacks. In our construction we leverage a puncturable key encapsulation scheme which permits each ciphertext to only be decrypted once. Fundamentally, this is achieved by evolving the secret key after each decryption operation, but without modifying the corresponding public key or relying on shared state.

Our construction can be seen as an application of the puncturable encryption idea of Green and Miers (Oakland 2015). We provide a new generic construction of this tool that can be instantiated with any selectively secure hierarchical identity-based key encapsulation scheme.

Group signatures allow members of a group to sign messages anonymously. Only a special party, the group manager, is able to extract the identity of a signer from a valid signature. Consequently, only the group manager is able to decide whether two signatures were created by the same signer. In practical applications, however, this ability is frequently needed, which is why various techniques have been developed that make signatures linkable. In this talk we present several linking techniques, which are relevant not only for group signatures. Furthermore, we discuss their impact on the corresponding security definitions.

We present and discuss recent results which show that under appropriate cryptographic assumptions Nash equilibria cannot be computed in polynomial time. More generally, the results show that the class PPAD is not contained in P. The cryptographic assumptions include the existence of one-way functions and of certain obfuscation schemes.

In anonymous credential systems, there are authorities, users, and verifiers. Users approach an authority, who issues them a credential. A credential certifies that a user has certain rights (or enjoys a certain level of trust), which can be checked by a verifier. These systems are anonymous because the user does not reveal his exact identity to the verifier. Instead, the verifier only learns the identity of the authority who issued the user his credential (and if the verifier trusts this authority's judgement, it will allow the user access).

A natural extension of this concept is that the user is not necessarily granted his rights directly by the authority, but indirectly through a chain of trust. This is realized by delegatable credential systems. That is, the actual authority can issue a (level 1) credential to some other party, enabling that party to issue (level 2) credentials in the authority's name. The second party may further delegate its credential to a third party, enabling it to issue level 3 credentials, etc. This is similar to how TLS certificate chains work. In contrast to TLS certificates, however, a verifier does not learn the specific delegation chain, as it may restrict the user's anonymity unnecessarily. For example, if the delegation chain is [Telekom] (root) -> [DFN] -> [Paderborn University] -> [Specific Student], then all involved parties except the Telekom are hidden from verifiers, as any one of them would unnecessarily restrict the anonymity of the authenticating student.

In the talk, I am going to briefly present the ideas behind state-of-the-art delegatable credential systems. Then we are going to discuss attribute-based delegatable credential systems, which allow very fine-grained management and delegation of access rights. 

Semantic security (SS) is an intuitive definition of the security properties of encryption schemes. While, in the case of conventional PKE schemes, semantic security (in its many variations) is equivalent to the simpler indistinguishability definition (IND) [1], this is not the case, for example, in the context of predicate encryption schemes (PEs).

Nevertheless, the indistinguishability definition has found widespread use for PEs, whereas for semantic security there exist impossibility results even for the simplest and most natural predicates [2,3,4]. For the same predicates, however, the equivalence of indistinguishability and semantic security has also been shown [5]. How is that possible?

In this talk we get to the bottom of this question, consider different ways of defining semantic security in the context of PEs, and show at which point the equivalence between SS and IND breaks down.

[1] Goldreich, O.: The Foundations of Cryptography - Volume 2, Basic Applications. Cambridge University Press (2004).

[2] Boneh, D., Sahai, A., Waters, B.: Functional encryption: Definitions and challenges. In: Ishai, Y. (ed.) Theory of Cryptography - 8th Theory of Cryptography Conference, TCC 2011. LNCS, vol. 6597, pp. 253–273. Springer (2011).

[3] Bellare, M., O'Neill, A.: Semantically-secure functional encryption: Possibility results, impossibility results and the quest for a general definition. In: Abdalla, M., Nita-Rotaru, C., Dahab, R. (eds.) Cryptology and Network Security - 12th International Conference, CANS 2013. LNCS, vol. 8257, pp. 218–234. Springer (2013).

[4] Barbosa, M., Farshim, P.: On the semantic security of functional encryption schemes. In: Kurosawa, K., Hanaoka, G. (eds.) Public-Key Cryptography - PKC 2013 - 16th International Conference on Practice and Theory in Public-Key Cryptography. LNCS, vol. 7778, pp. 143–161. Springer (2013).

[5] Attrapadung, N., Cui, Y., Galindo, D., Hanaoka, G., Hasuo, I., Imai, H., Matsuura, K., Yang, P., Zhang, R.: Relations among notions of security for identity based encryption schemes. In: Correa, J.R., Hevia, A., Kiwi, M.A. (eds.) LATIN 2006: Theoretical Informatics, 7th Latin American Symposium. LNCS, vol. 3887, pp. 130–141. Springer (2006).

Attribute-based Signatures were first introduced as a natural extension of digital signatures, where the messages are interpreted as attributes from a pre-defined set.
Over the years a much stronger and more general model of Attribute-based Signatures has been developed. Today the term Attribute-based Signatures describes signatures that are computed on messages and attributes, where the attributes are separated from the messages. In detail, the attributes are encoded in the signing key and a signature is valid only if the attributes fulfil a pre-defined policy (e.g., a Boolean formula).
In this talk we will compare security models and present a concrete scheme.

(The talk will be held in German)

Searchable encryption is a technology that encrypts data in a way that allows ciphertexts to be searched. We ask how searchable encryption can be achieved in scenarios with multiple users involved, i.e. data sharing. Scenarios where not all users are authorized to access all data are of particular interest to us, because such scenarios ask for means of access control.

The talk will give an introduction to searchable symmetric encryption and how it can serve as a foundation for searchable encryption with access control.

Regression analysis, roughly speaking, is about modelling dependencies between variables. The simplest approach is simple linear regression, in which an explicit model is determined. Viewed in the context of Bayesian inference, this becomes so-called Bayesian linear regression where, roughly speaking, not a single model but a distribution over models is determined, indicating how reasonably each individual model describes the dependency. Looking at this latter approach more closely, one notices that it can also be described more abstractly, without the notion of concrete models. This works by means of multivariate Gaussian distributions and their generalization, so-called Gaussian processes. The talk gives an overview of these methods.
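To make the step from a single model to a distribution over models concrete, here is a minimal numpy sketch of the Bayesian linear regression posterior (standard conjugate-Gaussian formulas; the toy data, prior, and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: y = 2x + 1 plus Gaussian noise
X = np.column_stack([np.ones(20), rng.uniform(-1, 1, 20)])  # bias + feature
w_true = np.array([1.0, 2.0])
sigma2 = 0.1**2                           # known noise variance
y = X @ w_true + rng.normal(0, np.sqrt(sigma2), 20)

# Gaussian prior w ~ N(0, tau2 * I); the posterior is again Gaussian:
#   Sigma_post = (X^T X / sigma2 + I / tau2)^(-1)
#   mu_post    = Sigma_post X^T y / sigma2
tau2 = 1.0
Sigma_post = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
mu_post = Sigma_post @ X.T @ y / sigma2

print("posterior mean of the weights:", mu_post)  # close to [1, 2]
```

The posterior covariance Sigma_post is exactly the "how reasonable is each model" information that a single point estimate discards, and marginalizing out the weights is what leads to the Gaussian-process view.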

(The talk will be held in German)

K-means is one of the most widely used clustering objectives. Its most used local search heuristic, the K-means algorithm, is known to have exponential worst-case running time and to produce arbitrarily bad solutions. Kanungo et al. proved that an alternative, swap-based, local search approach yields an O(1) approximation to the K-means problem.

In this talk, we present the basics of the class PLS and the complexity classification of local search problems. We show that swap-based local search for K-means and similar problems is tightly PLS-complete, which means that these algorithms also have exponential worst-case running time.

(The talk will be held in German)

Spectral clustering methods are able to detect patterns at which traditional clustering algorithms, such as the K-means algorithm, would fail miserably. Moreover, these methods are very easy to implement. The only major drawback is their high running time. Nevertheless, spectral clustering methods achieve high-quality results on (small) datasets.

Why spectral algorithms compute anything meaningful at all is not obvious at first. Therefore, in the first part of the talk we look at some fundamental properties that connect the spectrum of special matrices to clusterings. The second part of the talk is about a recent result [1], which states that a particular spectral algorithm for the min-max conductance clustering problem (on special instances) actually works (provably) well.

[1] Peng R., Sun H., Zanetti L.: Partitioning Well-Clustered Graphs: Spectral Clustering Works!, COLT 2015

Due to its high practical relevance, the running time of the K-means algorithm is a recurring focus of theoretical research. In the first part of this talk, we look at a basic idea that was used to show that the running time of K-means can be exponential in the number of points, even in two dimensions.

Despite this extremely bad worst-case behaviour, the algorithm usually terminates quite quickly in practice. One attempt to explain this effect is smoothed analysis. In the second part of this talk, we cover the basics of smoothed analysis and consider a simple analysis of the K-means algorithm on high-dimensional datasets.

In many cryptographic systems, signatures are combined with non-interactive zero-knowledge proofs (NIZK). Security proofs for such systems often resort to the random oracle model. One reason is that, so far, there is no NIZK scheme secure in the standard model that can be combined with classical digital signatures.

Structure-Preserving Signatures (SPS), however, are digital signatures with special algebraic properties. These properties make it possible to combine the signatures with Groth-Sahai NIZK proofs (GS-NIZK), which are secure in the common reference string (CRS) model.

In the seminar, SPS and their combination with GS-NIZK will be demonstrated and discussed.

A sequence of data accesses can provide useful information on the data that is being accessed. If a cloud storage provider sees a couple of files being accessed at about the same time, and this pattern is observed several times, then with high probability there is a strong relation between these files' contents. The cloud storage provider can come to such a conclusion without knowledge of the contents, i.e. even if the files are encrypted.

Ensuring confidentiality is the reason for encrypting files, but the above example shows how the context of accesses to an encrypted file (the access pattern) can pose a breach of confidentiality. Oblivious RAM addresses the issue of information leakage through access patterns by completely hiding the access pattern.

In the seminar, we will present a lower bound on the costs of oblivious RAM, as well as techniques for constructing oblivious RAM.

Short signatures are important for several applications like credential systems, group signatures, reputation systems, and identification protocols. A recent direction in this area is therefore to define signature schemes in the type 3 bilinear group setting. The type 3 setting describes bilinear groups with efficient exponentiation, hashing, and pairing. Regarding security and the future development of computational power, one should only consider the type 3 setting.

Sequential aggregate signatures are used in secure route attestation and certificate chains. For example, a backbone router wants to advertise the routes which should be used to reach its network. Each router on a routing path adds its signature to a valid route attestation before sending it to its neighbors. Deployed network protocols limit the size of route attestations, so we use aggregate signatures to reduce the overhead of multiple signatures. Interestingly, aggregate signatures can in general be used to shorten certificate chains.

In the seminar, we will present the mentioned theoretical construction and take a look at the advantages of the approaches.

Verifiable computation is concerned with the following scenario: Some computationally weak unit V wants to delegate an (expensive) computation of f(x) to an untrusted third party P. After receiving the alleged result y, the weak party V wants to ensure that indeed, y = f(x), without having to redo the computation itself. To facilitate this, P supplies V with a short and efficiently checkable "proof" that y = f(x).

Interestingly, this scenario can be realized for arbitrary computable functions f such that proof verification is essentially polylogarithmic in the length of the original computation. Unfortunately, while theoretically interesting, this construction remains far too computationally expensive for practical use. In recent years, cryptography research has been working towards practical verifiable computation.

In the seminar, we will present the mentioned theoretical construction and take a look at modern approaches.