CRYPTO 2014 rump session

The rump session at CRYPTO had plenty of talks in many subfields of cryptology, and secure computation was no exception. The part of the rump session concerned with MPC was started by Xiao Shaun Wang discussing “Circuit ORAM and on the Tightness of the Goldreich-Ostrovsky ORAM Lower Bound”. In particular he discussed an implementation (co-authored with T-H. Hubert Chan and Elaine Shi) of Circuit ORAM, based on the JustGarble framework. This approach yields great improvements (up to a factor of 48 on 1 GB of data) over implementations based on Path ORAM.
Xiao’s talk was followed by Kartik Nayak discussing “Oblivious data structures”. Kartik considers the problem of data access overhead when using an ORAM: specifically, the overhead is the amount of data transferred in the oblivious case divided by the amount transferred in the non-oblivious case. The best known result is O(log^2(N)/log log(N)) overhead [KLO’12]. Kartik and his co-authors (Xiao Shaun Wang, Chang Liu, T-H. Hubert Chan, Elaine Shi, Emil Stefanov, Yan Huang) manage to improve on this result for restricted access patterns, such as proximity and bounded-degree trees, which reflect how data is generally accessed in most computer programs and algorithms. An open-source implementation using garbled circuits as a backend is currently under development by the authors.
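To get a feel for the generic bound, here is a toy calculation (my own, not from the talk) evaluating the O(log^2(N)/log log(N)) overhead for a few database sizes, with constants ignored:

    # Toy illustration (not from the talk): the generic ORAM overhead bound
    # O(log^2(N)/log log(N)) of [KLO'12] for a few database sizes, constants ignored.
    import math

    def generic_oram_overhead(n):
        log_n = math.log2(n)
        return log_n ** 2 / math.log2(log_n)

    for exponent in (10, 20, 30):          # databases of 2^10, 2^20, 2^30 items
        n = 2 ** exponent
        print(f"N = 2^{exponent}: overhead ~ {generic_oram_overhead(n):.0f}x (up to constants)")
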
Xiao Shaun Wang then returned to the stage to continue the ORAM theme by presenting “SCVM: An Efficient, Automated RAM Model Secure Computation Framework”. The goal of SCVM is to make it possible for non-expert programmers to program secure computation tasks in a matter of hours. The SCVM system ensures security through a type system and aims to offer efficiency competitive with customized circuits for a large class of functions. More specifically, the system works as follows: the programmer writes a program in a source language, which is passed to a frontend compiler; the output of that compiler is an SCVM intermediate representation, which is then passed to a backend compiler. The compilers implement several compile-time optimizations, and the system is going to support rich libraries such as data structures, machine learning and graph algorithms. As in the previous presentations, SCVM is based on ORAM with garbled circuits as a backend.
Next up was Wutichai Chongchitmate describing how to do “Optimally Resilient and Adaptively Secure MPC with Low Communication Locality”. The project (joint work with Nishanth Chandran, Juan A. Garay, Shafi Goldwasser, Rafail Ostrovsky, Vassilis Zikas) considers how to do MPC with very many parties while retaining low communication complexity. Chongchitmate explained that this is possible to achieve, even in the case of a dishonest majority controlled by an adaptive adversary. The core idea is to encode the communication patterns into a symmetric-key infrastructure, in particular having every party use its symmetric keys to decide on its set of neighbors. Expander graphs are then used to show that it is infeasible for an adaptive adversary to discover these patterns and disconnect two honest parties.
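As a toy illustration of the “keys determine neighbors” idea (my own sketch, not the authors’ construction), each party could derive a small neighbor set pseudorandomly from its symmetric key; to the adversary the honest parties’ choices look random, which is where the expander-graph argument enters:

    # Toy sketch (not the authors' construction): party i derives a small set of
    # neighbors from a hash-based PRF under its symmetric key, so the
    # communication graph is determined by the key infrastructure rather than
    # being published in the clear.
    import hashlib

    def neighbors(party_id, key, n_parties, degree):
        chosen = set()
        counter = 0
        while len(chosen) < degree:
            digest = hashlib.sha256(key + party_id.to_bytes(4, "big") +
                                    counter.to_bytes(4, "big")).digest()
            candidate = int.from_bytes(digest, "big") % n_parties
            if candidate != party_id:
                chosen.add(candidate)
            counter += 1
        return sorted(chosen)

    # Example: party 7 out of 1000 parties picks a small neighbor set.
    print(neighbors(7, b"party-7-secret-key", 1000, 10))
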
Afterwards Tore Frederiksen and Roberto Trifiletti discussed their work in progress (co-authored with Thomas Jakobsen and Jesper Nielsen) on practical optimizations of the LEGO paradigm for secure computation. The idea of LEGO is to use garbled gates to get maliciously secure two-party computation, but instead of doing cut-and-choose on entire garbled circuits, cut-and-choose is done on individual gates. Non-checked gates are then combined to construct a single fault-tolerant garbled circuit. However, putting individual gates together is expensive, so Tore and Roberto highlighted some new approaches for doing this, yielding much smaller constants than those given in the previous LEGO works.
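To see why gate-level cut-and-choose can work, here is a toy simulation (my own, not the protocol from the talk) in which half of the garbled gates are opened and checked and the remaining gates are grouped into buckets that tolerate a minority of faulty gates by majority voting:

    # Toy simulation (not the protocol from the talk): the garbler produces many
    # gates, half are opened and checked, and the rest are grouped into buckets
    # whose outputs are decided by majority vote, so a few bad gates are tolerated.
    import random

    def one_run(total_gates, corrupted, bucket_size):
        gates = [True] * (total_gates - corrupted) + [False] * corrupted  # False = bad gate
        random.shuffle(gates)
        checked, remaining = gates[:total_gates // 2], gates[total_gates // 2:]
        if not all(checked):
            return "caught"        # a bad gate was opened during checking
        buckets = [remaining[i:i + bucket_size]
                   for i in range(0, len(remaining) - bucket_size + 1, bucket_size)]
        for bucket in buckets:
            if sum(not g for g in bucket) > bucket_size // 2:
                return "cheated"   # a bucket with a bad majority slipped through
        return "ok"

    runs = [one_run(total_gates=1024, corrupted=12, bucket_size=4) for _ in range(10_000)]
    print({outcome: runs.count(outcome) for outcome in set(runs)})
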
The next speaker was Mike Rosulek, who described a new webpage project for summaries of papers in MPC. The summaries are short and easily readable, aimed in particular at first-year Ph.D. students and other people with basic knowledge of crypto and an interest in MPC. Currently the site contains 30 papers and a glossary, but the hope is to keep the site community driven. If you want to have a look or submit a summary, navigate your browser to http://tinyurl.com/mpc-annotated.
The summaries above cover the most MPC-relevant talks; however, the rump session contained many other interesting talks with applications to MPC. If you wish to have a look at the titles and slides, navigate to http://crypto.2014.rump.cr.yp.to.

MPC Workshop(s) in 2014

It looks like 2014 will be an interesting year for MPC. Here is a list of relevant events, in chronological order:

The third edition of this event, with a focus on practical aspects of cryptography, will take place in New York on January 13-15. Unfortunately registration has already closed, but we still want to mention that MPC will be present at the workshop with the “Practical Multi-Party Computation” session on Monday afternoon.

We already blogged about this exciting event organized by Seny Kamara and Payman Mohassel, so here is just a little update: the list of talks accepted to the workshop is now available online. The event is free and registration is still possible.

Following the success of the 2012 edition, this year’s workshop will promote exchange between research on the theory and practice of MPC. Registration is free.

Efficient Secure Computation at Eurocrypt 2013

There were multiple presentations about secure multi-party computation at Eurocrypt 2013. I’ll describe here only a few of these talks.

Tore Frederiksen presented work on “MiniLEGO: Efficient Secure Two-Party Computation From General Assumptions”, coauthored with Thomas P. Jakobsen, Jesper Buus Nielsen, Peter Sebastian Nordholt and Claudio Orlandi, all from Aarhus University, Denmark. This work builds on the LEGO paper of Nielsen and Orlandi from TCC 2009. That work presented a secure two-party protocol that applied the cut-and-choose technique at the gate level, where one party generates many gates and the other party checks some of the gates and uses the others. A main technical challenge is how to connect together the gates that are chosen to be used. The original LEGO paper solved this problem by relying on a specific number-theoretic assumption and using public-key operations per gate. The new paper connects gates using a new XOR-homomorphic commitment scheme based on linear error-correcting codes and oblivious transfer. The entire construction is therefore based on symmetric primitives, except for the few seed OTs needed to bootstrap the OT extension.

Saeed Sadeghian presented his work with Payman Mohassel on “How to Hide Circuits in MPC: An Efficient Framework for Private Function Evaluation”. The goal of this work is to obtain Private Function Evaluation (PFE), where privacy here means that the computation hides the function that is computed (in the sense that both the wiring of the circuit and the gates are kept hidden). Valiant has shown that every circuit with |C| gates can be computed by a universal circuit of size O(|C| log |C|). PFE can also be based on a fully homomorphic encryption scheme. Katz and Malka designed a two-party PFE protocol based on singly homomorphic encryption that uses O(|C|) public-key operations.
The new work presents a general framework in the semi-honest (passive) adversary setting. The results are for 2PC and MPC, and for both binary and arithmetic circuits. A major tool is oblivious switching networks, which are based on OT (which in turn can be extended to use mostly symmetric-key operations). Private switching based on this tool is then applied to the Yao, GMW and CDN01 protocols.
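As a toy illustration (mine, not the paper’s construction), the basic building block of such a switching network is a 2x2 switch that either passes or swaps its inputs depending on a control bit; in a PFE protocol the control bits encode the hidden wiring and are applied obliviously, e.g. via OT, whereas the sketch below only shows the plaintext functionality:

    # Toy sketch of the building block of a switching network: a 2x2 switch
    # controlled by one secret bit. In PFE the control bits encode the hidden
    # wiring of the circuit and are evaluated obliviously; this only shows the
    # plaintext behaviour.
    def switch(a, b, control_bit):
        return (a, b) if control_bit == 0 else (b, a)

    def apply_layer(values, control_bits):
        """Apply one layer of 2x2 switches to an even-length list of wire values."""
        out = []
        for i, bit in enumerate(control_bits):
            x, y = switch(values[2 * i], values[2 * i + 1], bit)
            out.extend([x, y])
        return out

    # Routing four wire values through one layer of two switches.
    print(apply_layer(["w0", "w1", "w2", "w3"], [1, 0]))   # swaps only the first pair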

Hoeteck Wee presented joint work with Dov Gordon, Tal Malkin and Mike Rosulek on “Multi-Party Computation of Polynomials and Branching Programs without Simultaneous Interaction”. The paper extends the results of Halevi et al. on secure one-pass protocols, which introduced this type of protocol and showed how to implement it for computing symmetric functions and choice functions (as well as presenting a generic but less practical protocol). The current work presents one-pass protocols for computing sparse multivariate polynomials, which can be used for computing statistical functions such as the variance, and for computing read-once branching programs, which can be used for computing functions such as string matching, finite automata, and second-price auctions. A major limiting factor of the new construction is that, unlike the previous work, the order of the participants needs to be known in advance, or computed on the fly.

Steve Lu presented his work with Rafi Ostrovsky titled “How to Garble RAM Programs”. This truly interesting work presents a secure two-party protocol where the two parties mimic a computation on a RAM machine, without converting the RAM programs into circuits. The protocol that is run is non-interactive. Each access to the RAM is implemented using oblivious RAM (ORAM), where the ORAM needs to access locations that depend on the index of the looked-up item and are unknown at “compilation” time. In order to achieve this property without interaction, the protocol implements at Step j a circuit that constructs the circuit for Step j+1 and encodes in it the locations that will be accessed.

Minutes from Leiden

As some of you may know, a school and a workshop on Mathematics of Information-Theoretic Cryptography were held at the Lorentz Center at Leiden University this May. Organized by Ignacio Cascudo, Ronald Cramer, Venkatesan Guruswami, Yuval Ishai, Carles Padró and Chaoping Xing, both events had several interesting talks covering a wide range of topics. In this blog post I will cover some of the highlights concerning information-theoretic tools and protocols for MPC that were presented during the workshop.
Multi-Linear Secret Sharing
Amos Beimel presented new results on multi-linear secret sharing schemes, obtained in a joint work with Aner Ben-Efraim, Carles Padró and Ilya Tyomkin. Intuitively, such schemes can be viewed as a generalization of regular linear secret sharing schemes that work over secrets composed of an arbitrary number of field elements. In his talk, Amos showed that for every prime p, multi-linear secret sharing schemes where the secret is composed of p field elements are more powerful than similar schemes where the secret is composed of fewer than p elements. This observation sheds light on the nature of such schemes and gives interesting insight into optimizing protocols and applications that use them as a building block. He also presented super-polynomial lower bounds on the share size of multi-linear secret sharing schemes, which were previously only known for linear schemes.
OT-based Secure Computation
Jesper Buus Nielsen described his new approach to secure two-party computation that appeared at Crypto 2012 in a joint work with Peter Sebastian Nordholt, Claudio Orlandi and Sai Sheshank Burra. He focused on the oblivious transfer (OT) extension protocol described in that paper, which makes it possible to obtain many actively secure oblivious transfers for a pretty cheap price (O(1) evaluations of a hash function per OT). This protocol builds on the general approach to OT extension introduced by Yuval Ishai et al. and yields more than a million actively secure OTs per second. Having explained his approach to secure two-party computation and its efficiency, Jesper discussed how having cheap access to such large quantities of OTs could help obtain more efficient two-party computation. An interesting question posed by him is whether the IPS compiler could achieve the same efficiency as (or even better than) his Crypto’12 protocol with the right parameters and underlying codex.
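For readers who have not seen OT extension before, here is a toy, passively secure sketch of the IKNP-style idea (my own simplification, not the actively secure Crypto’12 protocol): k base OTs plus one hash evaluation per extended OT yield m >> k oblivious transfers.

    # Toy, passively secure sketch of IKNP-style OT extension (my simplification,
    # not the actively secure Crypto'12 protocol). The base OTs are simulated by
    # computing the matrix Q directly; in the real protocol the sender obtains
    # the columns of Q from K base OTs with choice bits s, where the receiver
    # offers (column_i(T), column_i(T) XOR r).
    import hashlib, secrets

    K = 128        # number of base OTs / row width in bits
    M = 1000       # number of extended OTs

    def H(index, row):
        """Hash a K-bit row into a one-time pad (modelled as a random oracle)."""
        return hashlib.sha256(index.to_bytes(4, "big") +
                              row.to_bytes(K // 8, "big")).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Receiver: choice bits r and a random M x K bit matrix T (one int per row).
    r = [secrets.randbelow(2) for _ in range(M)]
    T = [secrets.randbits(K) for _ in range(M)]

    # Sender: random base-OT choice bits s; after the (simulated) base OTs its
    # matrix Q satisfies Q[j] = T[j] XOR (r[j] * s) row by row.
    s = secrets.randbits(K)
    Q = [T[j] ^ (s if r[j] else 0) for j in range(M)]

    # Sender masks its message pairs and sends them.
    x0 = [secrets.token_bytes(32) for _ in range(M)]
    x1 = [secrets.token_bytes(32) for _ in range(M)]
    y0 = [xor(x0[j], H(j, Q[j])) for j in range(M)]
    y1 = [xor(x1[j], H(j, Q[j] ^ s)) for j in range(M)]

    # Receiver unmasks the message matching its choice bit using H(j, T[j]).
    received = [xor(y1[j] if r[j] else y0[j], H(j, T[j])) for j in range(M)]
    assert all(received[j] == (x1[j] if r[j] else x0[j]) for j in range(M))
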
Cryptographic Complexity
Manoj Prabhakaran gave a very nice overview of classic and recent results on “cryptographic complexity”. He defined the cryptographic complexity of a (multi-party) function as the amortized number of “crypto gates” needed to securely evaluate the function. This can be viewed as a formalization of the long-standing question of which cryptographic primitives can be used as building blocks to obtain other primitives and specific functions. In fact, as Manoj pointed out in his talk, the cryptographic complexity of a certain function does not depend on the specific crypto gate used as building block, as long as it is complete (for example, Oblivious Transfer). He went over recent results and state-of-the-art techniques to obtain lower bounds on cryptographic complexity, touching different areas such as secure function evaluation and cryptography based on noisy resources. Building on his general exposition of the subject, Manoj identified exhibiting functions with super-linear cryptographic complexity (or even proving the existence of such functions) as a main open problem in this field.
Functional Secret Sharing
Yvo Desmedt called the audience’s attention to Functional Secret Sharing, which allows a qualified set of parties to obtain a function evaluated on the secret and nothing else. This result appeared in the paper Computing Functions Of Shared Secrets by Yvo himself in collaboration with Amos Beimel, Mike Burmester and Eyal Kushilevitz. Yvo pointed out the similarity between this overlooked technique and current results on functional encryption. The paper presents two main approaches: evaluating the function over the secret through private channels and through public broadcast channels. The scheme originally proposed is based on Shamir’s secret sharing scheme, naturally limiting the result to threshold access structures. Yvo remarked that there are still many open problems concerning functional secret sharing, such as generalizing the existing results to general access structures, obtaining more efficient schemes, considering other (possibly weaker) security definitions and obtaining better results on the communication complexity of such schemes.
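As a toy illustration of the underlying mechanism (my own sketch, not the scheme from the paper): since Shamir’s scheme is linear, a qualified set can evaluate an affine function f(x) = a*x + b on the shared secret by applying f locally to their shares and interpolating the results. This only illustrates the linearity that such constructions exploit; the actual schemes and their privacy guarantees are more involved.

    # Toy sketch of the linearity mechanism (not the scheme from the paper):
    # parties apply an affine map f(x) = a*x + b to their own Shamir shares, and
    # interpolating the results yields f(secret).
    import random

    P = 2**61 - 1                          # a prime field for this toy example

    def share(secret, threshold, n_parties):
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
                for i in range(1, n_parties + 1)]

    def reconstruct(points):
        """Lagrange interpolation at x = 0."""
        result = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            result = (result + yi * num * pow(den, P - 2, P)) % P
        return result

    secret, a, b = 42, 7, 5
    shares = share(secret, threshold=3, n_parties=5)
    f_shares = [(x, (a * y + b) % P) for x, y in shares[:3]]   # purely local work
    assert reconstruct(f_shares) == (a * secret + b) % P       # yields f(secret)
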
Broadcast Efficient MPC
Juan Garay presented his paper on Broadcast-Efficient Secure Multiparty Computation, which is a joint work with Clint Givens and Rafail Ostrovsky. In this paper, the broadcast channel is considered an expensive resource, as it cannot be simulated if more than a third of the parties are corrupted. The main goal of the paper is to obtain protocols for MPC that minimize the use of the broadcast channel (or “broadcast complexity”). The main result obtained towards this goal is a concrete protocol that only accesses the broadcast channel three times during a preprocessing phase and then never uses it again. This model fits well in situations where the parties are allowed to come together for an initial preprocessing phase where they have temporary access to a broadcast channel. Moreover, the protocol has a constant number of rounds. At its core are new constructions of verifiable secret sharing and pseudosignatures.
Crypto from Noisy Channels
Suhas Diggavi introduced a new model of noisy-channel communication where all parties have access to an “erasure broadcast channel” and investigated which cryptographic primitives and protocols can be built on it. In contrast to Wyner’s classical wiretap channel, the model introduced by Suhas provides a single broadcast channel shared by all parties (including the adversary) where part of the messages transmitted are lost (erased). Consequently, all the honest parties and the adversary have exactly the same view of the information exchanged over the channel, requiring different techniques to obtain private message transmission and other primitives. Suhas showed that it is necessary to assume a (public) noiseless feedback channel between parties in order to obtain interesting cryptographic applications. Concretely, he constructed key agreement and private message transmission solely from the physical noise properties of the channel and the feedback channel. There are still many open problems in this model, such as obtaining oblivious transfer and bit commitment, and studying the capacity of this kind of channel for such cryptographic primitives.

Efficient Secure Computation at TCC 2013

At TCC 2013, the topic of efficient secure computation was widely discussed; I’ll mention just two of the talks. Tal Malkin gave an invited talk on secure computation on big data (where she means really big). I’ll let her blog about her talk, but will just say that she gave some very interesting insights on the problem. In addition, Benny Applebaum presented a really nice result about how to achieve free-XOR under standard assumptions. Of course, his encryption scheme is not as efficient as AES and so this isn’t about implementations, but it’s a really nice interplay of theory and practice. In addition, he presented a more natural security requirement for free-XOR that is based on a combination of security in the presence of related-key and related-message attacks. This builds on work at last year’s TCC about the security of the free-XOR technique by Choi et al. However, the notion of security presented this year is arguably more natural than “2-circular correlation robustness”.
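As background for readers who have not seen the trick that the result is about, here is a toy sketch of free-XOR garbling itself (standard background, not Applebaum’s construction): all wire labels differ by a single global offset, so an XOR gate needs no garbled table at all.

    # Toy sketch of the free-XOR trick (standard background, not Applebaum's
    # construction): every wire gets labels (W, W XOR delta) for one global
    # offset delta, so an XOR gate is garbled "for free" by XORing labels.
    import secrets

    delta = secrets.token_bytes(16)                 # global offset, kept by the garbler

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def fresh_wire():
        zero = secrets.token_bytes(16)
        return zero, xor(zero, delta)               # labels for bit 0 and bit 1

    a0, a1 = fresh_wire()
    b0, b1 = fresh_wire()
    c0, c1 = xor(a0, b0), xor(xor(a0, b0), delta)   # output wire of the XOR gate

    # Evaluating on labels for a=1, b=0 yields the label for c = 1 XOR 0 = 1,
    # with no garbled table and no decryption.
    assert xor(a1, b0) == c1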

At the rump session, there were four talks that I’ll mention.

Rafi Ostrovsky described a new paper on secure two-party computation via garbling RAM programs (to appear at Eurocrypt 2013). The paper has some very nice theoretical upper bounds. Beyond this, the basic construction for a single execution uses only technology from the 90s: this was Rafi’s polite way of saying “no fully homomorphic encryption, no bilinear maps and no lattices”. I guess that young cryptographers would be surprised that you can actually do something new based on boring old DDH, but here you have it! Anyway, although this approach doesn’t yet seem to compete with existing highly optimized protocols, it certainly looks like it has great potential.

Next, Claudio Orlandi introduced the MPClounge, which you already know about if you are reading this, so I won’t elaborate.

After Claudio, Ivan Damgard described a new application of secure computation to the domain of mitigating server breaches for the case of one-time password authentication. The basic idea is as follows. One-time password authentication schemes work by parties having devices that compute a new short 6-8 digit password every time they authenticate. This helps alleviate the problem of users choosing bad passwords. Now, these devices contain a cryptographic key and compute the new password by applying a function like AES or HMAC to the time or some other transient value. In order to verify the one-time password, a server has to compute the cryptographic function itself, derive the password and compare. The problem with this system is that if a server breach occurs, then all of the cryptographic keys can be stolen. In such a case, all user devices have to be replaced, which is extremely expensive (most of these devices cannot even be reprogrammed with a new key). This exact scenario happened to RSA, and Lockheed-Martin reported attacks on their systems that can be traced back to the server breach at RSA (note that devices were not replaced after this breach, probably because it wasn’t clear exactly what was stolen and the cost would be too great); search for “Lockheed-Martin RSA” for more info. Now, it is possible to mitigate the danger of such a server breach by splitting the server into two and giving each server a share of the cryptographic key for computing the one-time passwords. Then, one-time passwords can be verified by running a SECURE COMPUTATION to compute AES, HMAC or whatever algorithm is needed. This forces the attacker to break into both servers, which is much harder (in order to ensure that it’s much harder, they should be given different protection mechanisms and/or be at different locations; in addition, the shares should be refreshed periodically so that an attacker has to break into both simultaneously). Note that RSA provides such a solution for static passwords but not for one-time passwords because of the cost of secure computation. However, we already have fast enough protocols to do this, and Ivan demoed code that can do about 10 AES computations in about 10 milliseconds, so there is no delay in this authentication verification. The system currently implemented is based on the SPDZ protocol ([DPSZ] at Crypto 2012), which is secure in the presence of malicious adversaries and uses preprocessing. This is actually a joint project between Ivan, Nigel, Jesper and myself, and we hope to actually make it commercial (the solution has been patented by Bar-Ilan).
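As a toy sketch of the key-splitting idea (mine, not the actual SPDZ-based system): the device key is XOR-shared between the two servers, the one-time password is only ever computed from the shares inside the secure computation, and periodic refreshing forces an attacker to breach both servers within the same period.

    # Toy sketch of the key-splitting idea (not the actual SPDZ-based system):
    # the device key never exists in one place -- each server holds an XOR share,
    # and refreshing re-randomises the shares without changing the key.
    import secrets

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    device_key = secrets.token_bytes(16)     # e.g. the AES key inside the token

    # Split: server 1 stores k1, server 2 stores k2, with k1 XOR k2 = device_key.
    k1 = secrets.token_bytes(16)
    k2 = xor(device_key, k1)

    # Refresh: both servers apply the same random offset; old stolen shares
    # become useless, so an attacker must breach both servers between refreshes.
    offset = secrets.token_bytes(16)
    k1, k2 = xor(k1, offset), xor(k2, offset)
    assert xor(k1, k2) == device_key

    # Password verification would run AES/HMAC on (k1, k2) inside a secure
    # computation, so neither server ever reconstructs the key.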

Finally, I gave two talks at the rump session. First, I presented SCAPI (http://crypto.biu.ac.il/scapi), a library that we are building at Bar-Ilan that is designed for secure computation implementations. The library is meant to be a general infrastructure and is not aimed at any specific protocol. It is open source and is written in Java. However, high efficiency is obtained via JNI. This means that you can get the incredible speed of Miracl elliptic curve operations (e.g., 0.5 ms for an exponentiation) while writing high-level Java code at the abstract level that cryptographic protocol designers like to work at. We hope that SCAPI will be of use to the community (more information can be found on the website).

Next, I presented a new result that I recently posted on ePrint for achieving malicious security via garbled circuits. In this new protocol, we achieve an error of $2^{-s}$ with just $s$ circuits (plus some additional overhead). This means that 40 circuits alone suffice to obtain $2^{-40}$ security, improving on the previous best of 125. More details can be found at http://eprint.iacr.org/2013/079. Note that another new protocol was posted at http://eprint.iacr.org/2013/081, but I won’t comment on it and will let you read and compare. (It is an interesting and exciting time in this field.)
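To see roughly where these numbers come from, here is a back-of-the-envelope calculation (my own simplification, not the exact analysis from the papers). In classic cut-and-choose with majority evaluation, the adversary wins only if all of its corrupted circuits avoid the check set and form a majority of the evaluated circuits; in the new approach it wins only if every single evaluated circuit is corrupted, which with independent check/evaluate choices happens with probability $2^{-s}$.

    # Back-of-the-envelope comparison (my simplification, not the exact analysis
    # from the papers). Majority-style cut-and-choose: check half the circuits,
    # evaluate the rest, take the majority of the outputs.
    from math import comb, log2

    def majority_cheat_prob(s):
        """Best cheating probability against a check-half/evaluate-half majority rule."""
        checked = s // 2
        evaluated = s - checked
        best = 0.0
        for bad in range(evaluated // 2 + 1, evaluated + 1):
            # all `bad` corrupted circuits must avoid the check set
            best = max(best, comb(s - bad, checked) / comb(s, checked))
        return best

    # (Published protocols optimise the checked fraction; half/half keeps it simple.)
    for s in (40, 125):
        print(f"s = {s}: majority-style error ~ 2^{log2(majority_cheat_prob(s)):.1f}, "
              f"all-evaluated-bad error = 2^-{s}")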

Finally, I’ll close by commenting that while in Japan I gave an invited talk on efficient two-party computation at PKC. This shows the interest this topic is getting, which I am really happy about. You can see my slides on the PKC 2013 website (http://ohta-lab.jp/pkc2013/slides/Lindell.pdf).
