A delegation of the Aarhus Crypto Group (Bernardo David, Tore Frederiksen, Claudio Orlandi, Tomas Toft and Roberto Trifiletti) is attending the Workshop on Applied Multi-Party Computation and will attempt to cover this event on the MPCLounge.
We decided that timeliness is more important than perfection, and we therefore apologize for any mistake in these posts!
This is the first of a 4 post series.
First Session: Invited Talk
Efficient Two-Party Secure Computation for Semi-Honest and Malicious Adversaries
Being the first speaker of the workshop, Yehuda started with a brief introduction to two-party computation (definitions, applications, etc.). In particular, Yehuda advocated the need for highly efficient protocols with semi-honest security: he claimed that in some practical applications (for instance, two hospitals trying to compute joint statistics on their patient records) each party is not really afraid of the other trying to cheat to learn more; rather, if the other party is broken into by an external attacker, one's own data should not be compromised (and semi-honest security guarantees this).
Yehuda went on to describe how “in the old days” the community did not believe that generic protocols (like Yao or GMW) would be competitive against special-purpose protocols, but this changed in the last few years. From a “theory vs. practice” perspective, it is fascinating to see how Yao and GMW were meant to be pure feasibility results back in the 80s, and nowadays they are the best protocols we have.
Then Yehuda talked about the “GMW comeback”. Until a few years ago, Yao was considered more efficient than GMW because GMW requires an OT for each gate, and OT requires exponentiation. But thanks to OT extension, things are changing. OT extension is the analogue of hybrid encryption: it was first shown possible by Beaver (as a feasibility result) and then made efficient by Ishai et al. (with an amortized cost of only three hash evaluations per OT).
However, when implementing protocols, we find out that the bottlenecks are not only cryptographic: OT extension requires transposing large matrices and lots of communication — cryptographers usually do not count the complexity of these operations, but it turns out we should. Yehuda joked that when he was “young”, efficiency meant only fewer cryptographic operations. But today we have faster cryptography (think of AES-NI) and very efficient protocols, so other bottlenecks arise and one needs to go back to the drawing board. This is the main motivation behind his work at CCS’13 on OT extension (coauthored with Asharov, Schneider and Zohner), where an optimized (semi-honest) OT extension protocol is presented that can be used to run GMW on a circuit with 1 million gates in 0.5s. The take-home message: try to understand “real world computation” and optimize your protocols based on this.
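To give a feel for this non-cryptographic cost, here is a minimal sketch (the dimensions and the integer packing of rows are illustrative choices of mine, not details from the talk) of the bit-matrix transposition step:

```python
import secrets

# Toy n x k bit matrix with each row packed into a Python int
n, k = 512, 128
rows = [secrets.randbits(k) for _ in range(n)]

# Naive transposition: gather bit j of every row into column j.
# Real implementations use word-level/SIMD tricks precisely because this
# plain loop is the kind of "free" operation that becomes a bottleneck.
cols = [sum(((rows[i] >> j) & 1) << i for i in range(n)) for j in range(k)]

# Spot-check a few entries of the transpose
assert all((rows[i] >> j) & 1 == (cols[j] >> i) & 1
           for i in range(0, n, 37) for j in range(0, k, 11))
```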
Then Yehuda went on to the history of the cut-and-choose approach for making Yao's protocol actively secure. In a work from EUROCRYPT 2007 (with Pinkas), an actively secure protocol was proposed: to get statistical security 2^(-40), a prohibitive replication factor of 680 was needed. This went down first to 128, and then to only 40 using the “forge-and-lose” technique (CRYPTO 2013).
Yehuda then announced a new result where the replication factor can go all the way down to 4, if one is evaluating the same circuit (possibly on different inputs) over and over again. The idea (at least to me) is very reminiscent of the “LEGO cut-and-choose” approach: the same circuit is garbled many times, cut-and-choose is performed, and the surviving circuits are put into small buckets. If there are N buckets, the size of each bucket only needs to be O(s/log N), where s is the statistical security parameter.
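To get a feel for why bucketing amortizes so well, here is a rough Monte Carlo sketch. The parameters and the winning condition (the adversary succeeds only if every circuit in some bucket is corrupted) are my simplifying assumptions for illustration, not details from the talk:

```python
import random

def attack_succeeds(N, B, bad_count):
    """One cut-and-choose trial: 2*N*B garbled circuits, bad_count corrupted."""
    total = 2 * N * B
    bad = set(random.sample(range(total), bad_count))
    opened = set(random.sample(range(total), total // 2))
    if bad & opened:
        return False                     # a corrupted circuit was opened: caught
    rest = [c for c in range(total) if c not in opened]
    random.shuffle(rest)
    buckets = [rest[i * B:(i + 1) * B] for i in range(N)]
    # Attack wins only if some bucket consists entirely of bad circuits
    return any(all(c in bad for c in bkt) for bkt in buckets)

N, B, trials = 32, 4, 20000
rate = sum(attack_succeeds(N, B, B) for _ in range(trials)) / trials
print(rate)   # empirical cheating probability: tiny, and shrinks fast in B
```

The point of the sketch: the bad circuits must both survive the opening phase and all land in the same bucket, which is why a small per-bucket replication suffices when many executions share the cut-and-choose.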
Finally Yehuda talked about the SCAPI library for secure computation developed at Bar-Ilan University, and stated once again their long term commitment to this library.
Second Session: New Techniques and Models
Multi-party computation of polynomials and branching programs without simultaneous interaction
The talk considers the problem that parties engaging in a multi-party computation in general need to be online at the same time. This might not always be possible, e.g. if the parties are in different time zones across the world. Because of this, the notion of one-pass secure computation was introduced in [HLP11], where each party communicates with a server once, and when all parties have given their input to the server, it releases the output of the computation. However, this leads to an inherent leakage problem: the server might collude with the last party (or parties) supplying input, and thus compute several evaluations of the function (on different inputs of the corrupted parties). Because of this, not all functions can be securely computed in this model, e.g. PRFs.
Hoeteck then presents a protocol for one-pass secure computation of sparse multivariate polynomials and read-once branching programs. Examples of functions in these classes are low-degree polynomials and finite automata. Specifically, the presentation considers branching programs, where the cost ends up being O(width) exponentiations per party.
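To make the function class concrete, here is a toy example (my own, not from the talk) of a finite automaton viewed as a read-once branching program: each input bit is read exactly once, and the “width” is the number of automaton states.

```python
# Parity automaton: a width-2 read-once branching program over the input bits
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd',  '0'): 'odd',  ('odd',  '1'): 'even'}

def run(bits, state='even'):
    for b in bits:                    # each input bit is read exactly once
        state = delta[(state, b)]
    return state

assert run('1101') == 'odd'           # three 1s: odd parity
assert run('1100') == 'even'          # two 1s: even parity
```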
Efficiently secure three party computation
Alex is talking about how to achieve efficient and actively secure three-party computation with dishonest majority, the motivation being that this specific case occurs often in practice, e.g. the Danish sugar beet auction and Sharemind. (Some objections were raised by the audience here: it was argued that the choice of 3 parties in these examples was only an artifact of the fact that with 3 parties one can use efficient information-theoretic protocols based on secret sharing, instead of more expensive protocols based on computational assumptions, and not an intrinsic requirement of the application.)
The specific approach is to generalize two-party protocols instead of using a general MPC protocol. The overall idea is to have the three parties emulate a cut-and-choose protocol, where two parties jointly play the circuit constructor and the third party acts as the evaluator. However, this cannot be done trivially, as we require obliviousness of the keys in the garbled circuit, i.e. neither of the two parties constructing the circuit is allowed to know the wire keys. This is achieved by using the distributed encryption scheme from [DI05]. Furthermore, some of the primitives from TinyOT [NNOB12] are needed in order to do the garbling. In the end, the protocol is constant-round and costs roughly twice as much as the underlying cut-and-choose 2PC protocol based on garbled circuits.
Improved OT Extension for Transferring Short Secrets
Secure computation is moving fast from theory to practice, and a major effort consists in improving efficiency and addressing implementation and “systems” issues. The state of the art (in the semi-honest setting) includes both theoretical advances, such as protocols with constant overhead, optimal communication/round complexity, and ORAM-based SFE, and practical advances, such as optimizations for Yao’s garbled circuits, optimizations for GMW, and Yao+GMW hybrids.
PKE is generally more expensive than SKE, hence basing protocols on SKE techniques makes them cheaper. However, we know that PKE primitives cannot be black-box reduced to SKE primitives. An alternative is extending public-key primitives using secret-key primitives: in other words, using a small initial number of public-key operations to obtain a large number of primitive instantiations that later require only secret-key operations.
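The hybrid-encryption analogy can be sketched as follows: one expensive public-key operation (here a toy Diffie-Hellman exchange; the prime, generator, and key-derivation details are illustrative only, not a recommendation) pays for arbitrarily many cheap symmetric-key operations afterwards:

```python
import hashlib
import secrets

# One "expensive" public-key operation: a toy Diffie-Hellman exchange
p = 2**127 - 1                      # Mersenne prime, for illustration only
g = 3
a, b = secrets.randbelow(p), secrets.randbelow(p)
A, B = pow(g, a, p), pow(g, b, p)
shared = pow(B, a, p)
assert shared == pow(A, b, p)       # both sides derive the same secret

key = hashlib.sha256(str(shared).encode()).digest()

# Many cheap symmetric-key operations derived from that single exchange
def stream_xor(key, nonce, msg):
    """Toy SHA-256-based stream cipher (encryption == decryption)."""
    out = bytearray()
    for i, off in enumerate(range(0, len(msg), 32)):
        pad = hashlib.sha256(key + nonce + i.to_bytes(4, 'big')).digest()
        out += bytes(m ^ k for m, k in zip(msg[off:off + 32], pad))
    return bytes(out)

ct = stream_xor(key, b'\x00' * 8, b'hello, many cheap encryptions')
assert stream_xor(key, b'\x00' * 8, ct) == b'hello, many cheap encryptions'
```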
OT is used both in Yao’s garbled circuits and in GMW, being of central importance to Secure Computation. It is known that OT cannot be based on symmetric-key primitives nor extended without additional operations. Nevertheless, it was shown that OT can be extended using symmetric-key primitives. In this scenario, a small number of initial seed OTs is used to obtain further instances of OT that will only require cheap symmetric-key operations.
While Beaver provided the first feasibility results for this scenario, Ishai et al. showed how to do it efficiently in the semi-honest case. In this construction, the parties need to exchange large matrices, resulting in a large communication cost (roughly 2nL bits for the main reduction and 2nk bits for length extension, for n instances of extended OT of length L obtained from seed OTs of length k).
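The core of the Ishai et al. reduction can be sketched as follows, with the k seed OTs simulated directly in one process (the message lengths, the SHA-256 instantiation of the hash, and the integer encoding of bit vectors are my choices for the sketch, not the paper's):

```python
import hashlib
import secrets

k = 128       # number of seed OTs (security parameter)
n = 1000      # number of extended OTs

def H(j, v):
    """Hash a k-bit row (as an int) to a 16-byte one-time pad."""
    return hashlib.sha256(j.to_bytes(4, 'big') +
                          v.to_bytes(k // 8, 'big')).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Receiver: choice bits r and a random n x k bit matrix T (rows as ints)
r = [secrets.randbits(1) for _ in range(n)]
T = [secrets.randbits(k) for _ in range(n)]

# Sender: random secret s in {0,1}^k
s = secrets.randbits(k)

# The k seed OTs (run column-wise, simulated here) give the sender the
# matrix Q with rows q_j = t_j ^ (r_j * s)
Q = [T[j] ^ (s if r[j] else 0) for j in range(n)]

# Sender's n input pairs (x0, x1) of 16-byte messages
X = [(secrets.token_bytes(16), secrets.token_bytes(16)) for _ in range(n)]

# Sender masks each pair with H(j, q_j) and H(j, q_j ^ s)
pads = [(xor(X[j][0], H(j, Q[j])),
         xor(X[j][1], H(j, Q[j] ^ s))) for j in range(n)]

# Receiver unmasks exactly x_{r_j} using H(j, t_j)
out = [xor(pads[j][r[j]], H(j, T[j])) for j in range(n)]
assert all(out[j] == X[j][r[j]] for j in range(n))
```

Note how correctness falls out of the row relation: if r_j = 0 then q_j = t_j, and if r_j = 1 then q_j ^ s = t_j, so the receiver can always remove exactly one of the two pads.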
In the result presented in this talk, they take a new view of the protocol by Ishai et al., especially of how the matrices used in the original protocol are constructed. This approach involves building a coding-theoretic framework for the Ishai et al. protocol, using efficient (high-rate) codes to encode the matrices exchanged. The final construction achieves perfect security against a malicious sender and statistical security against a semi-honest receiver. For concrete efficiency, Hadamard codes are used, yielding improvements by a factor of approximately 2, or even 3.5 if further optimizations are employed. On the technical side, their framework can be realized in the random oracle model or using a new class of hash functions that are “code correlation robust”.
Secure computation can be based on many different techniques (Yao’s garbled circuits, FHE and GMW-style compilers), obtaining different security guarantees (passive or active security). In the specific case of garbled circuits, active security can be obtained by cut-and-choose techniques at the circuit or gate level. Basically, the party who garbles the circuits creates several copies of a circuit and reveals information about half of them, allowing the party who evaluates the circuits to check whether they were correctly constructed.
The basic cut-and-choose approach has an overhead of O(s). The MiniLEGO approach, on the other hand, does a similar cut-and-choose over gates instead of circuits and achieves an overhead of O(s / log|C|), which translates into better efficiency for large circuits. In this framework, the garbler commits to several garbled gates, half of which are later revealed to the evaluator, who can check whether they were correctly constructed. The leftover gates are then put into “buckets”, where it is guaranteed that a majority of the gates are correctly constructed. Each bucket is then used as a regular gate to evaluate a circuit; in a process called soldering, the buckets are put together to form the required circuit.
With this new approach, all current optimizations for Yao’s garbled circuits can be used. It would still be interesting to obtain cheaper XOR-homomorphic commitments and to improve the complexity constants, though. The main ingredients of this kind of technique are free XOR and XOR-homomorphic commitments. The free-XOR technique allows the parties to compute XOR gates locally and very cheaply, while XOR-homomorphic commitments allow the receiver to verify XOR relations between values inside unopened commitments.
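A minimal sketch of the free-XOR idea (key length and representation are illustrative): every wire's 1-key is its 0-key XORed with a global offset Delta, so an XOR gate needs no garbled table at all:

```python
import secrets

Delta = secrets.randbits(128)        # global offset, shared by all wires

def wire():
    """A wire's key pair: (key encoding 0, key encoding 1)."""
    k0 = secrets.randbits(128)
    return (k0, k0 ^ Delta)

a, b = wire(), wire()

# XOR gate: define the output 0-key as a0 ^ b0 -- no garbled table needed
c0 = a[0] ^ b[0]
c = (c0, c0 ^ Delta)

# The evaluator, holding keys for bits x and y, computes the output key
# for x ^ y entirely locally:
for x in (0, 1):
    for y in (0, 1):
        assert a[x] ^ b[y] == c[x ^ y]
```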
In the soldering process, gates are put into buckets that act as regular gates. For that to work, all gates in a bucket should get the same inputs, so that the majority of their outputs can be taken as the bucket's output. In this process, the free-XOR technique and the XOR-homomorphic commitments are used to compute the evaluation keys of individual gates and to ensure that maliciously constructed gates are detected. Later on, the right input keys are obtained through OT. The efficiency of the commitment scheme directly affects the final efficiency of the scheme; hence, an efficient XOR-homomorphic commitment scheme is constructed from OT and error-correcting codes, which can be done fairly efficiently using OT extension.
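The soldering step itself can be sketched in the same style (again assuming free-XOR keys with a shared global offset Delta; the names and key length are mine): publishing the XOR of two wires' 0-keys lets the evaluator translate a key from one wire to the other without learning the encoded bit:

```python
import secrets

Delta = secrets.randbits(128)        # global free-XOR offset
a0 = secrets.randbits(128)           # zero-key of an input wire of gate 1
b0 = secrets.randbits(128)           # zero-key of an input wire of gate 2

solder = a0 ^ b0                     # published "solder" value for the pair

for bit in (0, 1):
    ka = a0 ^ (Delta if bit else 0)  # key on wire A encoding `bit`
    # XORing with the solder value maps it to wire B's key for the same bit:
    assert ka ^ solder == b0 ^ (Delta if bit else 0)
```

Because the solder value is a XOR relation between committed keys, an XOR-homomorphic commitment scheme lets the evaluator verify it without the garbler opening the keys themselves.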