A delegation of the Aarhus Crypto Group (Bernardo David, Tore Frederiksen, Claudio Orlandi, Tomas Toft and Roberto Trifiletti) is attending the Workshop on Applied Multi-Party Computation and will attempt to cover this event on the MPCLounge.
We decided that timeliness is more important than perfection, and we therefore apologize for any mistakes in these posts!
This is the second of a 4-post series.
First Session: Panel on Theory vs Practice
The first panel of the workshop, titled “Theory vs. Practice of MPC”, featured Yehuda Lindell and David Evans, with Payman Mohassel as session chair. Three main topics were brought up, listed below:
– How do we increase collaboration between different related communities outside crypto?
Yehuda started out by saying that the theory and practice communities within crypto work pretty well together. However, we cannot expect MPC research to come from communities outside crypto; contributions such as compiler optimizations and hardware enhancements (e.g., AES-NI) are nonetheless welcome.
There was some discussion about the likelihood of getting more cryptographic primitives embedded in standard hardware, but the overall consensus was that this will probably not happen anytime soon. However, specialized hardware is being built, e.g. for FHE.
Next the discussion turned to trusted hardware and it was noted that trusted hardware in 2014 probably shouldn’t be trusted blindly given recent events.
David made the remark that to get people to use and push MPC, they must first understand it, and hence optimize systems for it. What we need is a killer application and driver for MPC: Not something that people are already doing today, to which we can add privacy using MPC. But something that no one is willing to do today, but that people would be willing to do using MPC.
– How do we evaluate efficiency of MPC and security of MPC?
In general the panel members agreed that the way efficiency of cryptographic protocols is measured in our field is extremely messy. Everyone uses their own implementations of basic primitives and runs their experiments in very different settings. All this leads to irreproducible results, since no one can rerun the experiments, and it is very hard to compare results across papers.
Finally the discussion turned to security models. In crypto we’re often black/white: a protocol is either secure or insecure. However, it was argued that in the real world, for a lot of problems, it is often considered acceptable to leak some information.
– Implementing MPC protocols is time-consuming. Can we do anything to remedy this?
A lot of people implement their protocols from scratch, including all the primitives they utilize. This is time consuming and often ends up in hacky code readable only by its author. Yehuda advocated the SCAPI library, which gives authors access to many primitives that can be used as building blocks for the actual protocol. David agreed that SCAPI is a valuable tool: “If you are implementing a Yao protocol, you don’t want to spend time implementing OT”.
Finally, the importance of implementing your protocol was underlined. A lot of performance issues only become apparent when you implement things: there are tons of factors to consider, such as bandwidth, network latency, round complexity, cryptographic operations and plaintext operations. We’ve reached a point where, in many cases, crypto isn’t the performance bottleneck anymore.
Second Session: Garbled Circuits
fleXOR: flexible garbling for XOR gates that beats free-XOR
FleXOR is an optimization of the free-XOR approach for garbled circuits. The free-XOR approach [KS08] makes it possible to evaluate XOR gates in a garbled circuit for free (without any communication or cryptographic computation). Free XOR is compatible with the (small) garbled row reduction, which reduces the number of ciphertexts in a non-XOR garbled gate from 4 to 3. However, it is not compatible with the (large) garbled row reduction, which reduces the number of ciphertexts to 2. Furthermore, the free-XOR approach requires a quite strong security assumption (with a circular-security flavor). FleXOR, on the other hand, makes it possible to use a weaker assumption (only “correlation robustness”) and to use the large row reduction, which requires only 2 ciphertexts per gate. The price is that not all XOR gates can be done for free, but in general the average number of ciphertexts for a garbled circuit ends up being lower with fleXOR than with free XOR.
The fleXOR construction is based on not having a single global difference in the garbled circuit (as in free XOR), but several differences (still fewer than the number of gates). As a result, ciphertexts are only sometimes needed for garbled XOR gates; exactly when, and how many, depends on the circuit structure.
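The free-XOR trick that fleXOR generalizes can be sketched in a few lines of Python (an illustrative toy using plain integers as wire labels, not a secure implementation):

```python
import secrets

# Toy sketch of the free-XOR idea [KS08], not the fleXOR scheme itself:
# every wire's two labels differ by one global offset R, so an XOR gate
# needs no ciphertexts -- XORing the input labels yields a valid output label.
K = 128                      # label length in bits
R = secrets.randbits(K) | 1  # global offset

def new_wire():
    """Return (label_for_0, label_for_1) with label_1 = label_0 XOR R."""
    zero = secrets.randbits(K)
    return zero, zero ^ R

# Garble an XOR gate: the output zero-label is just the XOR of the
# input zero-labels; no garbled table is transmitted.
a0, a1 = new_wire()
b0, b1 = new_wire()
c0 = a0 ^ b0
c1 = c0 ^ R

# Evaluation: whatever truth values x, y the labels encode,
# XORing them gives the label encoding x XOR y.
for (x, la) in [(0, a0), (1, a1)]:
    for (y, lb) in [(0, b0), (1, b1)]:
        assert la ^ lb == (c0 if x ^ y == 0 else c1)
```

In fleXOR there are several such offsets rather than one global R, so some XOR gates fall back to using ciphertexts, but the weaker assumption and the large row reduction become available.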
Zero-knowledge from garbled circuits and garbled circuits for zero-knowledge
Claudio showed how to do actively secure zero-knowledge proofs of non-algebraic statements (previously quite a complex task) using a passively secure version of Yao’s garbled circuits. To see how this is done, first notice that zero knowledge is a special case of MPC: it is a two-party protocol between a prover, holding an element x and a witness w that can be used to efficiently verify that x is in a specific language, and a verifier without any input. Thus the problem can be expressed as a function f_x(w) that outputs true if w is a witness for x and false otherwise. The protocol has the verifier construct a garbled circuit computing f_x and send it to the prover. The prover is given his input labels using an actively secure OT protocol, evaluates the garbled circuit, and commits to the output (a key representing either true or false), sending the commitment to the verifier. The verifier then reveals the randomness used to construct the garbled circuit, and the prover checks that the verifier has been honest; if so, he opens the commitment, and the verifier learns whether or not the prover actually had a witness for x, but nothing more.
Finally, Claudio talked about some work in progress: in zero knowledge the prover knows ALL the input bits, and therefore all the values on the internal wires of the circuit. Standard Yao gates provide oblivious evaluation because the evaluator does not know the values on the internal wires. Therefore, for the zero-knowledge case we can use a simpler garbling scheme that only guarantees soundness (no privacy): in particular, in these garbling schemes only 1 ciphertext per gate is needed (2 if one wants free XOR). Claudio finally noted that the fleXOR technique from the previous talk seems to apply in a straightforward way to garbled circuits without privacy as well.
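As a toy illustration of why evaluation is simpler when the evaluator already knows the wire values, here is a naive hash-based garbled AND gate in Python. This is my own sketch, not the scheme from the talk: it keeps all 4 ciphertexts (ignoring the 1-ciphertext optimization Claudio mentioned), and the SHA-256-based encryption is only a stand-in.

```python
import hashlib, secrets

def H(*labels):
    """Toy key-derivation hash over a sequence of 128-bit labels."""
    h = hashlib.sha256(b"".join(l.to_bytes(16, "big") for l in labels))
    return int.from_bytes(h.digest()[:16], "big")

def wire():
    """Two random 128-bit labels, encoding bit 0 and bit 1."""
    return [secrets.randbits(128), secrets.randbits(128)]

A, B, C = wire(), wire(), wire()
# Garbled table: row (x, y) encrypts the output label for x AND y.
table = {(x, y): H(A[x], B[y]) ^ C[x & y] for x in (0, 1) for y in (0, 1)}

# The prover holds the labels for known values x=1, y=1 and decrypts the
# matching row directly -- no oblivious row selection is needed, since
# only soundness (not privacy of the wire values) is required.
x, y = 1, 1
out = table[(x, y)] ^ H(A[x], B[y])
assert out == C[1]   # the label encoding AND(1, 1) = 1
```

Because the evaluator picks the row openly, all the machinery that hides which row is being decrypted can be dropped, which is what makes the cheaper soundness-only garbling schemes possible.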
Efficient garbling from a fixed-key blockcipher
Viet Tung Hoang
Viet pointed out an attack on the free-XOR approach when using the garbling scheme from [PSSW09]: it turns out that the keys on the left and right wires entering a gate can end up being identical, in which case it is possible to learn the global difference of the garbled circuit by XOR’ing two entries of the garbled computation table. This can occur because the circuit structure can yield an unfortunate symmetry (or because the same wire is directly used as both inputs to a garbled gate). With this in mind, Viet introduced a way to garble a gate using fixed-key AES which does not have this problem but is still highly efficient. The idea is to view the keys as elements of GF(2^k) and multiply each key by a distinct constant to break the symmetry. Questions were asked about how secure AES is in practice when used with a fixed key, and this seems to be one of the questions where the MPC community needs to interact with other crypto sub-communities (in this case, experts in symmetric crypto) to figure out what to do.
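The constant-multiplication idea can be sketched as follows. This is an illustrative toy in Python: the field, the constants (x and x^2, i.e. "2" and "4"), and the derivation are my own choices in the spirit of a "2A xor 4B" key derivation, not the exact scheme from the talk.

```python
# Interpret 128-bit keys as elements of GF(2^128) and multiply the left and
# right keys by distinct constants before feeding them to the fixed-key
# blockcipher, so that identical input keys no longer cancel under XOR.
MASK = (1 << 128) - 1
POLY = 0x87  # low bits of the irreducible polynomial x^128 + x^7 + x^2 + x + 1

def times2(k: int) -> int:
    """Multiply k by x in GF(2^128) (one shift plus a conditional reduction)."""
    k <<= 1
    if k >> 128:
        k = (k & MASK) ^ POLY
    return k

def derive(a: int, b: int, tweak: int) -> int:
    """Illustrative per-gate value 2*A xor 4*B xor tweak (a toy derivation)."""
    return times2(a) ^ times2(times2(b)) ^ tweak

# Even when the same key appears on both input wires, the derived values
# no longer cancel: 2k xor 4k = (x + x^2)*k, which is nonzero for k != 0.
k = 0x0123456789abcdef0123456789abcdef
assert times2(k) != times2(times2(k))
assert derive(k, k, 0) != 0
```

The point is that multiplication by a fixed field constant is essentially free (a shift and a conditional XOR), so breaking the symmetry costs almost nothing on top of the fixed-key AES call.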
Session Three: Applications
Secure Collaborative Statistics in Credit Rating Using Linear Programming
A farmer walks into a bank… it’s not the beginning of a joke, but the beginning of the talk by Tomas Toft (Aarhus University). Tomas started with the following motivating example: Danish dairy farmers need loans to support their businesses, and the banks want to assess how well the farmers are doing, to estimate the probability that a farmer will go bankrupt and not pay the loan back. Big banks have many clients, so they have a big sample to compare against; small banks do not, and they would like access to more data to jointly credit-rate farmers. This is a classic MPC scenario, where each bank inputs its database of clients, and a new farmer can input his data and learn his ranking and nothing else. The algorithmic tool for this is Data Envelopment Analysis (DEA), and Tomas argued that DEA translates easily to linear programming.
The underlying protocol is SPDZ for additions and multiplications, together with the comparison protocol of Lipmaa-Toft (FC’13), which can compare n-bit values with O(log n) multiplications. A Java implementation running on the Amazon cloud shows that this computation can, for typical inputs, be solved in slightly more than 3 minutes (or 5 seconds if some extra leakage is allowed) after the preprocessing.
Standard Network Flow problems with Secure Multiparty Computation
Abdel (from the Center for Operations Research and Econometrics, UC Louvain) started his talk by confessing to us that he is an engineer and loves applications, such as benchmarking, network flow problems, supply chain management, operations research, etc.
During his talk Abdel showed how to take algorithms that solve the aforementioned problems and implement them securely using MPC. To abstract away from the underlying protocols, Abdel uses the arithmetic black-box abstraction. Part of the effort is translating standard algorithms into “MPC-safe” algorithms: when running a secure computation, loops whose number of iterations depends on secret data, and data-dependent branching (if… then…), are a big no-no. This transformation makes the original algorithms (which already have quite high complexity) slower by a (multiplicative) factor of n^2. Experiments show that the cost of these algorithms in MPC is quite high (however, the implementations were done using the by-now obsolete VIFF framework).
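The flavor of this "MPC-safe" transformation can be shown with a small sketch (plain Python integers stand in for secret-shared values in the arithmetic black box; in a real protocol each addition, multiplication and comparison below would be a secure sub-protocol):

```python
# Making branching data-oblivious: a data-dependent if/else becomes an
# arithmetic select using only + and *, and loops run for a fixed public
# bound, so the execution trace is independent of the secret data.

def select(c, a, b):
    """Return a if c == 1 else b, without branching on c."""
    return c * a + (1 - c) * b

def oblivious_max(xs):
    """Maximum of a list; every iteration performs the same operations."""
    m = xs[0]
    for x in xs[1:]:            # public, fixed-length loop over the list
        gt = 1 if x > m else 0  # stand-in for a secure comparison protocol
        m = select(gt, x, m)
    return m

assert oblivious_max([3, 7, 2, 5]) == 7
```

Replacing every branch by such a select (and padding loops to their worst-case bound) is exactly what blows up the complexity of the original algorithms, which is where the n^2 slowdown comes from.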
MPC in Large Networks (with applications to anonymous broadcast)
Mahdi motivates his talk using examples from the real world, in particular the growth of modern networks such as BitTorrent, Facebook, Twitter or Bitcoin. How to run MPC with so many parties?
Their setting is N parties (for very big N), with at most N/10 (static, active) corruptions. While it is known that in the honest-majority setting it is possible to construct information-theoretic protocols, Mahdi argued that one can gain efficiency by incorporating computational assumptions into the protocols. In particular, using homomorphic encryption in some steps of the protocol, it is possible to reduce the number of rounds and the communication complexity.
Their protocol builds on many tools, such as quorum-based MPC, VSS, threshold FHE, SPDZ, etc. The main idea behind the protocol seems to be to assign each gate to a small set of parties (a quorum), who evaluate it and then send the result on to the next set of parties. This reduces communication, and due to the low number of corrupted parties and the quorum construction, one can make sure that each small subset of parties contains many honest parties.
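A quick back-of-the-envelope calculation shows why randomly chosen quorums contain many honest parties when at most N/10 parties are corrupted (the parameters below are illustrative numbers of my own choosing, not from the paper):

```python
from math import comb

# Hypergeometric tail: probability that a random q-subset of N parties
# contains a dishonest majority, given t corrupted parties overall.
def bad_quorum_prob(N, t, q):
    """P[a random quorum of size q contains more than q/2 of the t corrupted]."""
    total = comb(N, q)
    return sum(comb(t, k) * comb(N - t, q - k)
               for k in range(q // 2 + 1, q + 1)) / total

N, t, q = 10_000, 1_000, 40   # illustrative: 10% corruption, quorums of 40
p = bad_quorum_prob(N, t, q)
assert p < 1e-6               # dishonest-majority quorums are very unlikely
```

With a union bound over all quorums, logarithmic-size quorums already make every quorum honest-majority except with negligible probability, which is what lets each gate be handled by a small subset of the parties.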
An example application of their protocol is anonymous broadcast (the dining cryptographers problem).
This was the last talk of the first day, and by now my jet lag had really kicked in, so these notes will not do justice to Koki Hamada’s (NTT) very interesting work.
MEVAL is a very efficient framework for 3-party secure computation that tolerates at most 1 passive corruption (as in the sugar beet auction and Sharemind), but working modulo the Mersenne prime 2^61-1. MEVAL claims high performance (8.7 million multiplications per second; 6.9 seconds for sorting 1 million values) and can be used in the client-server model, where a number of clients share their data among 3 servers who run the computation and return the output. MEVAL implements basic MPC functionalities and statistical functions. They reported on joint experiments with a medical study group, a university hospital and a statistics bureau.
Under the hood, MEVAL contains many interesting optimizations, including asynchronous processing, pseudorandom secret sharing, and a careful choice of the finite field in which to perform the computation. Finally, MEVAL contains a number of optimized protocols for higher-level primitives, such as bit decomposition for small values, a sorting protocol, etc.
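As a small illustration of why a Mersenne prime is a convenient field choice, reduction modulo 2^61-1 needs only shifts, masks and adds, with no division (a sketch of the standard trick, not MEVAL's actual code):

```python
# For the Mersenne prime p = 2^61 - 1 we have 2^61 = 1 (mod p), so the high
# bits of a product can simply be folded back onto the low bits.
P = (1 << 61) - 1

def reduce61(x: int) -> int:
    """Reduce x < 2^122 (e.g. a product of two field elements) modulo 2^61 - 1."""
    x = (x & P) + (x >> 61)   # first fold: result fits in 62 bits
    x = (x & P) + (x >> 61)   # second fold: result is at most 2^61
    if x >= P:                # at most one final subtraction needed
        x -= P
    return x

a, b = 1234567890123456789 % P, 987654321987654321 % P
assert reduce61(a * b) == (a * b) % P
```

On 64-bit hardware this keeps every multiplication within native word arithmetic, which is part of what makes the reported multiplication throughput plausible.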