At TCC 2013, the topic of efficient secure computation was widely discussed; I’ll mention just two of the talks. Tal Malkin gave an invited talk on secure computation on big data (where she means really big). I’ll let her blog about her talk herself, and will just say that she gave some very interesting insights into the problem. In addition, Benny Applebaum presented a really nice result on how to achieve free-XOR under standard assumptions. Of course, his encryption scheme is not as efficient as AES, so this isn’t about implementations, but it’s a really nice interplay of theory and practice. He also presented a more natural security requirement for free-XOR, based on a combination of security in the presence of related-key and related-message attacks. This builds on work at last year’s TCC by Choi et al. on the security of the free-XOR technique; however, the notion of security presented this year is arguably more natural than their “2-circular correlation robustness”.

At the rump session, there were four talks that I’ll mention.

Rafi Ostrovsky described a new paper on secure two-party computation via garbling RAM programs (to appear at Eurocrypt 2013). The paper proves some very nice theoretical upper bounds. Beyond this, the basic construction for a single execution uses only technology from the 90s: this was Rafi’s polite way of saying “no fully homomorphic encryption, no bilinear maps and no lattices”. I guess that young cryptographers might be surprised that you can actually do something new based on boring old DDH, but here you have it! Anyway, although this approach doesn’t yet seem to compete with existing highly optimized protocols, it certainly looks like it has great potential.

Next, Claudio Orlandi introduced the MPClounge, which you already know about if you are reading this, so I won’t elaborate.

After Claudio, Ivan Damgard described a new application of secure computation to the domain of mitigating server breaches in one-time password authentication. The basic idea is as follows. One-time password authentication schemes work by giving users devices that compute a fresh short 6-8 digit password each time the user authenticates. This helps alleviate the problem of users choosing bad passwords. Now, these devices contain a cryptographic key and compute the new password by applying a function like AES or HMAC to the time or some other transient value. In order to verify the one-time password, a server has to compute the cryptographic function itself, derive the password and compare. The problem with this system is that if a server breach occurs, then all of the cryptographic keys can be stolen. In such a case, all user devices have to be replaced, which is extremely expensive (most of these devices cannot even be reprogrammed with a new key). This exact scenario happened to RSA, and Lockheed-Martin reported attacks on their systems that can be traced back to the server breach at RSA (note that devices were not replaced after this breach, probably because it wasn’t clear exactly what was stolen and the cost would be too great); search for “Lockheed-Martin RSA” for more info. Now, it is possible to mitigate the danger of such a server breach by splitting the server into two and giving each server a share of the cryptographic key for computing the one-time passwords. Then, one-time passwords can be verified by running a SECURE COMPUTATION to compute AES, HMAC or whatever algorithm is needed. This forces the attacker to break into both servers, which is much harder (in order to ensure that it’s much harder, they should be given different protection mechanisms and/or be at different locations; in addition, the shares should be refreshed periodically so that an attacker has to break into both simultaneously).
Note that RSA provides such a solution for static passwords but not for one-time passwords, because of the cost of secure computation. However, we already have fast enough protocols to do this, and Ivan demoed code that can do roughly 10 AES computations in 10 milliseconds, so there is no noticeable delay in this authentication verification. The system currently implemented is based on the SPDZ protocol ([DPSZ] at Crypto 2012), which is secure in the presence of malicious adversaries and uses preprocessing. This is actually a joint project between Ivan, Nigel, Jesper and myself, and we hope to actually make it commercial (the solution has been patented by Bar-Ilan).
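To make the idea above concrete, here is a minimal sketch in Python. It is not the demoed SPDZ-based system; the password derivation just follows the standard HOTP construction (HMAC-SHA1 over a counter, truncated to 6 digits, as in RFC 4226), and the XOR sharing only illustrates the key-splitting and refresh idea. In the real system the servers would never reconstruct the key in one place; the MAC would be computed inside the secure computation on the shares.

```python
import hashlib
import hmac
import secrets
import struct


def derive_otp(key: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short one-time password: HMAC the counter (e.g. a
    time step) under the device key and truncate (HOTP style)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def split_key(key: bytes) -> tuple:
    """Split the device key into two XOR shares, one per server.
    Neither share alone reveals anything about the key."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2


def refresh_shares(share1: bytes, share2: bytes) -> tuple:
    """Re-randomize both shares with the same fresh mask, leaving the
    underlying key unchanged; an attacker must now breach both
    servers within the same refresh epoch."""
    r = secrets.token_bytes(len(share1))
    return (bytes(a ^ b for a, b in zip(share1, r)),
            bytes(a ^ b for a, b in zip(share2, r)))
```

Verification then amounts to the two servers jointly evaluating `derive_otp` on their shares (via secure computation) and comparing the result to what the user submitted.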

Finally, I gave two talks at the rump session. First, I presented SCAPI (http://crypto.biu.ac.il/scapi), a library that we are building at Bar-Ilan, designed for secure computation implementations. The library is meant to be a general infrastructure and is not aimed at any specific protocol. It is open source and is written in Java; however, high efficiency is obtained via JNI. This means that you can get the incredible speed of Miracl elliptic curve operations (e.g., 0.5 ms for an exponentiation) while writing high-level Java code at the level of abstraction that cryptographic protocol designers like to work with. We hope that SCAPI will be of use to the community (more information can be found on the website).

Next, I presented a new result that I recently posted on ePrint for achieving malicious security via garbled circuits. In this new protocol, we achieve an error of $2^{-s}$ with just $s$ circuits (plus some additional overhead). This means that 40 circuits alone suffice in order to obtain $2^{-40}$ security, improving on the previous best of 125. More details can be found at http://eprint.iacr.org/2013/079. Note that another new protocol was posted at http://eprint.iacr.org/2013/081, but I won’t comment on it and will let you read and compare. (It is an interesting and exciting time in this field.)
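A rough back-of-the-envelope for where those numbers come from (this is my summary of the analyses; check the papers for the exact constants). With cut-and-choose over $s$ garbled circuits, the earlier protocols need a majority of the evaluated circuits to be correct, whereas the new protocol is designed so that a single correct evaluated circuit suffices (a cheating adversary must corrupt every evaluated circuit and no checked one, and each circuit is checked independently with probability $1/2$):

```latex
\underbrace{\Pr[\text{cheat}] \approx 2^{-0.32\,s}
  \;\Rightarrow\; s \approx \tfrac{40}{0.32} = 125}_{\text{previous best (majority must be good)}}
\qquad\text{vs.}\qquad
\underbrace{\Pr[\text{cheat}] = 2^{-s}
  \;\Rightarrow\; s = 40}_{\text{new protocol (one good circuit suffices)}}
```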

Finally, I’ll close by commenting that while in Japan I gave an invited talk on efficient two-party computation at PKC. This shows the interest this topic is getting, which I am really happy about. You can see my slides on the PKC 2013 website (http://ohta-lab.jp/pkc2013/slides/Lindell.pdf).