Secure Multiparty Computation

Multiparty computation (MPC) refers to protocols that enable multiple parties to jointly compute an output from private inputs they each hold. For security, each party wants its private input to remain secret throughout the computation.

For example, consider a blind auction in which each party's input is a private bid it wants to keep secret. The desired output is the identity of the highest bidder. In practice, a central third party often acts as auctioneer, collecting all bids to determine the winner. This auctioneer can leak all the bids and thereby poses a threat to every participant. The central question is: how can the parties determine the winner without revealing their bids to anyone?

This is where cryptography and the notion of secure MPC come into play. Secure MPC protocols realize such joint computations without relying on a trusted party, revealing only the desired results. For example, all bidders would interact via some underlying P2P network, perform the auction on suitably "masked" bids, and eventually only "unmask" the winning bidder's index i and its bid B. Suitably hiding all intermediate results ensures that the parties learn nothing beyond the output.
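The "masking" idea can be illustrated with additive secret sharing, one of the simplest MPC building blocks. The following is a minimal Python sketch (the modulus Q and the function names are illustrative, not part of any concrete protocol from the text): each party splits its input into random-looking shares, and only the sum of all inputs is ever unmasked. Note that the auction itself would additionally require a secure maximum computation, which is considerably more involved.

```python
import secrets

# Public modulus; chosen large enough that the true sum never wraps around.
Q = 2**64

def share(x, n):
    """Split x into n additive shares that sum to x mod Q.
    Any n-1 of the shares are uniformly random and reveal nothing about x."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def secure_sum(inputs):
    """Sketch of a secure sum: only the masked shares and the partial
    sums are ever exchanged, never the inputs themselves."""
    n = len(inputs)
    # Party i sends the j-th share of its input to party j.
    all_shares = [share(x, n) for x in inputs]
    # Party j locally adds the shares it received ...
    partials = [sum(s[j] for s in all_shares) % Q for j in range(n)]
    # ... and only these partial sums are published and combined.
    return sum(partials) % Q

bids = [17, 42, 5]
assert secure_sum(bids) == sum(bids)
```

Each individual share is uniformly distributed, so publishing the partial sums unmasks only the total.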

In cryptography, we formalize security requirements like "party A learns nothing beyond information x" via the so-called simulation paradigm. Intuitively, if the messages A receives during a protocol execution can be generated from x alone, then these messages carry no information beyond x. Formally, we require that for party A there exists an efficient simulator which, given only the ideal information x, generates messages that are indistinguishable from A's real protocol view.
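For a toy instance of this paradigm, consider a single message of the form (x + r) mod Q with a fresh uniform mask r, as it occurs in secret-sharing protocols. The sketch below (illustrative, with an artificially small modulus) shows that the real view is uniform regardless of x, so a simulator that knows nothing about x can sample an identically distributed view:

```python
import secrets
from collections import Counter

Q = 101  # small public modulus so the distributions are easy to inspect

def real_view(x):
    """The message party A actually receives: x masked by a fresh pad."""
    r = secrets.randbelow(Q)
    return (x + r) % Q

def simulated_view():
    """The simulator's output, generated without knowing x at all."""
    return secrets.randbelow(Q)

# Empirically, both views are uniform on {0, ..., Q-1}: a distinguisher
# cannot tell them apart, so the message reveals nothing about x.
real = Counter(real_view(42) for _ in range(100_000))
sim = Counter(simulated_view() for _ in range(100_000))
assert set(real) == set(sim) == set(range(Q))
```

Because (x + r) mod Q is uniform for every x, the two distributions are in fact identical, which is the strongest (perfect) form of indistinguishability.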

Thanks to its very general formulation and formalization, secure MPC can be employed in a wide range of situations; the following additional examples give a better intuition:

  • Aggregating statistics from different data sources (e.g. hospitals and insurers): As input, each party i has a private database with tuples (ID, value-i). As output, the parties aggregate the values, for example into their sum. This happens without creating a central database in which all values are joined by their ID; creating such a central dataset is sometimes even legally prohibited.
  • Training machine learning (ML) models on private data (e.g. training word prediction from keyboard data): Each party i continuously generates confidential input data, e.g. by typing on her smartphone. The output is a predictive ML model, obtained without revealing the confidential input data to anyone.
  • Evaluating ML models: One party holds a trained ML model as input while another party wants to apply it to her private input. For example, the latter party wants a classification of a private image. Neither the model nor the data it is applied to should be revealed.
  • Jointly setting up a cryptographic system: The parties have no input but want to generate the parameters of a cryptographic system as output. This is reasonable, for example, to prevent any single party from backdooring the system's parameters, and it avoids a single point of failure.
  • ...
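The first two examples can be sketched with pairwise cancelling masks, a technique used in secure-aggregation protocols for federated learning (this is a hedged illustration, not one of the protocols referenced above; in practice the pads would be derived via pairwise key exchange rather than handed out centrally). Parties i < j agree on a random pad r_ij; party i adds it and party j subtracts it, so every pad cancels in the published sum while each individual masked value stays uniformly random:

```python
import secrets

Q = 2**64  # public modulus, large enough that the true sum never wraps

def aggregate(values):
    """Each party publishes only its masked value; the pairwise pads
    cancel in the total, revealing nothing but the sum."""
    n = len(values)
    # pads[i][j] = r_ij, agreed on pairwise by parties i < j
    pads = [[secrets.randbelow(Q) for _ in range(n)] for _ in range(n)]
    masked = []
    for i, x in enumerate(values):
        m = x
        for j in range(n):
            if i < j:
                m = (m + pads[i][j]) % Q  # party i adds the shared pad
            elif i > j:
                m = (m - pads[j][i]) % Q  # party j subtracts the same pad
        masked.append(m)  # this masked value is all that party i reveals
    return sum(masked) % Q

readings = [130, 250, 90, 40]  # e.g. one aggregate value per hospital
assert aggregate(readings) == sum(readings)
```

Each pad appears exactly once with a plus sign and once with a minus sign, so the published sum equals the true sum while no central party ever sees an individual value.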

One of our main research interests is efficient MPC. While there are protocols for securely computing arbitrary efficient functions, these are often too slow for practical applications like training ML models or computing stable matchings. The challenge is to design fast yet secure MPC protocols tailored to the application at hand. We also consider the rational incentives of participants running MPC protocols. For example, in a secure computation of a (second-price) auction, rational participants have an incentive to submit their true bids. This can be exploited when designing protocols and is described in more detail in rational cryptography.
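The incentive argument for the second-price (Vickrey) auction can be seen from the function being computed. The sketch below evaluates it in the clear for illustration (an MPC protocol would compute the same function on masked bids): the winner pays the second-highest bid, so underbidding one's true value can only lose an item valued above the price, and overbidding cannot lower the price paid.

```python
def vickrey(bids):
    """Return (index of the winner, price = second-highest bid)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

bids = [30, 55, 42]
winner, price = vickrey(bids)
assert (winner, price) == (1, 42)

# Party 1 values the item at 55 and pays only 42, the second-highest bid.
# Bidding below 42 would lose an item valued above the price, while
# bidding above 55 cannot lower the price; truthful bidding is optimal.
```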