Efficient Secure Computation at TCC 2013

At TCC 2013, the topic of efficient secure computation was widely discussed; I’ll mention just two of the talks. Tal Malkin gave an invited talk on secure computation on big data (where she means really big). I’ll let her blog about her talk herself, and will just say that she gave some very interesting insights on the problem. In addition, Benny Applebaum presented a really nice result on how to achieve free-XOR under standard assumptions. Of course, his encryption scheme is not as efficient as AES, so this isn’t about implementations, but it’s a really nice interplay of theory and practice. He also presents a more natural security requirement for free-XOR, based on a combination of security in the presence of related-key and related-message attacks. This builds on work by Choi et al. at last year’s TCC on the security of the free-XOR technique; however, the notion of security presented this year is arguably more natural than “2-circular correlation robustness”.
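
For readers who have not seen it, here is a minimal sketch (mine, not Applebaum’s construction) of the invariant that the free-XOR technique of Kolesnikov and Schneider relies on: the two labels of every wire differ by a single global offset, so an XOR gate needs no garbled table at all, and the security question is exactly what assumption the reuse of that offset places on the underlying encryption scheme.

```python
# Minimal sketch of the free-XOR invariant (illustration only).
import secrets

LABEL_LEN = 16  # 128-bit wire labels

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

delta = secrets.token_bytes(LABEL_LEN)  # global offset, known only to the garbler

def fresh_wire():
    """Return (label_for_0, label_for_1) with label_1 = label_0 XOR delta."""
    w0 = secrets.token_bytes(LABEL_LEN)
    return w0, xor_bytes(w0, delta)

def xor_gate(wire_a, wire_b):
    """Free-XOR: the output 0-label is a0 XOR b0, and the 1-label again differs
    by delta, so the evaluator simply XORs whatever two labels it holds."""
    c0 = xor_bytes(wire_a[0], wire_b[0])
    return c0, xor_bytes(c0, delta)

# Check: evaluating with labels for bits x and y yields the label for x XOR y.
a, b = fresh_wire(), fresh_wire()
c = xor_gate(a, b)
for x in (0, 1):
    for y in (0, 1):
        assert xor_bytes(a[x], b[y]) == c[x ^ y]
print("free-XOR invariant holds")
```

Roughly, the need for related-key and related-message style security comes from the fact that the same offset delta is reused across all wires and also appears inside the values being encrypted in the garbled tables.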

At the rump session, there were five talks that I’ll mention.

Rafi Ostrovsky described a new paper on secure two-party computation via garbling RAM programs (to appear at Eurocrypt 2013). The paper has some very nice theoretical upper bounds. Beyond this, the basic construction for a single execution uses only technology from the 90s: this was Rafi’s polite way of saying “no fully homomorphic encryption, no bilinear maps and no lattices”. I guess that young cryptographers might be surprised that you can actually do something new based on boring old DDH, but here you have it! Anyway, although this approach doesn’t yet seem to compete with existing highly optimized protocols, it certainly looks like it has great potential.

Next, Claudio Orlandi introduced the MPClounge, which you already know about if you are reading this, so I won’t elaborate.

After Claudio, Ivan Damgard described a new application of secure computation to mitigating server breach in the case of one-time password authentication. The basic idea is as follows. One-time password authentication schemes work by giving users devices that compute a new short 6-8 digit password every time they authenticate; this helps alleviate the problem of users choosing bad passwords. These devices contain a cryptographic key and compute the new password by applying a function like AES or HMAC to the time or some other transient value. In order to verify the one-time password, a server has to compute the cryptographic function itself, derive the password and compare.

The problem with this system is that if a server breach occurs, then all of the cryptographic keys can be stolen. In such a case, all user devices have to be replaced, which is extremely expensive (most of these devices cannot even be reprogrammed with a new key). This exact scenario happened to RSA, and Lockheed-Martin reported attacks on their systems that can be traced back to the server breach at RSA (note that devices were not replaced after this breach, probably because it wasn’t clear exactly what was stolen and the cost would have been too great); search for “Lockheed-Martin RSA” for more info.

It is possible to mitigate the danger of such a server breach by splitting the server into two and giving each server a share of the cryptographic key for computing the one-time passwords. One-time passwords can then be verified by running a SECURE COMPUTATION to compute AES, HMAC or whatever algorithm is needed. This forces the attacker to break into both servers, which is much harder (to ensure that it really is much harder, the servers should be given different protection mechanisms and/or be at different locations; in addition, the shares should be refreshed periodically so that an attacker has to break into both simultaneously). Note that RSA provides such a solution for static passwords but not for one-time passwords, because of the cost of secure computation. However, we already have fast enough protocols to do this, and Ivan demoed code that can do about 10 AES computations in about 10 milliseconds, so there is no noticeable delay in the authentication verification. The system currently implemented is based on the SPDZ protocol ([DPSZ] at Crypto 2012), which is secure in the presence of malicious adversaries and uses preprocessing. This is actually a joint project between Ivan, Nigel, Jesper and myself, and we hope to actually make it commercial (the solution has been patented by Bar-Ilan).
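
To make the key-splitting idea concrete, here is a hypothetical sketch (my own illustration, not the demoed SPDZ-based system): the one-time-password key is XOR-shared between two servers, and the step that recombines the key and evaluates the PRF is exactly the part that would run inside the secure computation; here that step is only simulated in the clear.

```python
# Hypothetical sketch of split-key one-time-password verification.
# The "MPC" step is simulated in the clear purely for illustration.
import hmac, hashlib, secrets, struct, time

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key):
    """XOR-share the OTP key between two servers."""
    share1 = secrets.token_bytes(len(key))
    share2 = xor_bytes(key, share1)
    return share1, share2

def otp_from_key(key, timestep):
    """TOTP-style password: truncate HMAC(key, time counter) to 6 digits."""
    counter = struct.pack(">Q", timestep)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return "{:06d}".format(code % 10**6)

def verify_via_two_servers(share1, share2, candidate, timestep):
    """Stand-in for the secure computation: in the real system the two servers
    would evaluate the PRF on the reconstructed key *inside* MPC, so neither
    server ever sees the full key. Here we simply recombine in the clear."""
    key = xor_bytes(share1, share2)  # this recombination happens only inside MPC
    return hmac.compare_digest(otp_from_key(key, timestep), candidate)

# Demo: the device holds the full key; the servers only ever store shares.
device_key = secrets.token_bytes(20)
s1, s2 = split_key(device_key)
now = int(time.time()) // 30  # 30-second time step
print(verify_via_two_servers(s1, s2, otp_from_key(device_key, now), now))  # True
```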

Finally, I gave two talks at the rump session. First, I presented SCAPI (http://crypto.biu.ac.il/scapi), a library that we are building at Bar-Ilan and that is designed for secure computation implementations. The library is meant to be a general infrastructure and is not aimed at any specific protocol. It is open source and is written in Java. However, high efficiency is obtained via JNI. This means that you can get the incredible speed of Miracl elliptic curve operations (e.g., 0.5 ms for an exponentiation) while writing high-level Java code at the level of abstraction that cryptographic protocol designers like to work at. We hope that SCAPI will be of use to the community (more information can be found on the website).

Next, I presented a new result that I recently posted on ePrint for achieving malicious security via garbled circuits. In this new protocol, we achieve an error of $2^{-s}$ with just $s$ circuits (plus some additional overhead). This means that 40 circuits alone suffice in order to obtain $2^{-40}$ security, improving on the previous best of 125. More details can be found at http://eprint.iacr.org/2013/079. Note that another new protocol was posted at http://eprint.iacr.org/2013/081, but I won’t comment on it and will let you read and compare. (It is an interesting and exciting time in this field.)
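
Roughly speaking, the reason for the gap is the following. In previous cut-and-choose protocols, the garbler succeeds in cheating if a majority of the evaluation circuits are incorrect while all of the checked circuits are correct; analysing this event gives an error of roughly $2^{-0.32s}$, which is why about 125 circuits were needed for $2^{-40}$. In the new protocol, cheating goes undetected only if every checked circuit is correct and every evaluated circuit is incorrect, so the garbler essentially has to guess the entire check/evaluate partition, and the error is bounded by $2^{-s}$.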

Finally, I’ll close by commenting that while in Japan I gave an invited talk on efficient two-party computation at PKC. This shows the interest this topic is getting, which I am really happy about. You can see my slides on the PKC 2013 website (http://ohta-lab.jp/pkc2013/slides/Lindell.pdf).



Crypto is Dead; Long Live Crypto!

At last week’s RSA conference, in the Cryptographers’ Panel, Adi Shamir said that we need to prepare for a post-crypto world. A similar idea was put forward by Jonathan Zittrain in his invited talk at last year’s CRYPTO conference, which was entitled “The End of Crypto”. The basic idea behind both statements is that in the modern world the attacker is already inside the box, so no matter how much encryption you use, the bad guy will always get your data.

Let us elaborate on this idea a bit more. In the consumer space you may use your smartphone to store all sorts of important data (email addresses, passwords, banking data, etc.), but most users also download apps. Very few of these apps have had any form of code or security review, and so they could contain malicious code which compromises your phone, and your security.

In the enterprise space the problem is even more acute. The existence of APTs (Advanced Persistent Threats) means that enterprises need to assume that the attacker is already within their perimeter. This has always been true, in that attacks often come from the inside, but now we have seen external attacks based on dormant code, lying in wait, within an enterprise’s security perimeter.

The usual cryptographic model assumes that there is at least one honest party in a protocol. This may still be true: the user of a system may be honest, but if we cannot trust the system they are running on not to be compromised, then we can essentially trust nothing. What is the point of encrypting data if the attacker already has the key? Even if the data has been encrypted successfully, at some point it needs to be decrypted in order to be processed. At the point of decryption, if the attacker controls the machine, he also controls the decrypted data. With such all-pervasive threats, how can crypto be of any use whatsoever? In some sense this means “Crypto is Dead!”.

Luckily, however, cryptography is about more than just encryption and authentication. Modern cryptography gives us a number of tools which we can use to secure systems. In Bristol we have been working for a number of years on turning a theoretical technology called multi-party computation (MPC) into a practical reality. The basic idea behind MPC is the following: suppose a set of $n$ parties each have a secret input, say party $i$ has input $x_i$, and they want to compute some function of these inputs, say

$f(x_1, \ldots, x_n)$

MPC allows us to do this, via a protocol, in such a way that the parties learn the output of the function, but no one learns anything about the other parties’ inputs (beyond what the output itself reveals). Traditional use cases which have been proposed for MPC include the privacy-preserving sharing of databases and the like.
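
As a toy illustration of what such a functionality looks like (my own example, not any of the protocols mentioned in this post), here is a three-party sum computed via additive secret sharing: each party learns the sum, but nothing about the individual inputs.

```python
# Toy MPC illustration: three parties compute f(x1, x2, x3) = x1 + x2 + x3
# using additive secret sharing modulo a prime, so no party sees another's input.
import secrets

P = 2**61 - 1  # prime modulus; all arithmetic is mod P

def share(x, n_parties=3):
    """Split x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

# Each party secret-shares its input and sends one share to every party.
inputs = [5, 17, 42]
shares = [share(x) for x in inputs]  # shares[i][j] goes to party j

# Each party locally adds the shares it received ...
local_sums = [sum(shares[i][j] for i in range(3)) % P for j in range(3)]

# ... and the parties publish only these masked sums, which reconstruct the output.
print(sum(local_sums) % P)  # 64 = 5 + 17 + 42; the individual inputs stay hidden
```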

However, MPC also allows one to solve the problem of which hardware to trust in an enterprise environment such as that described above. In other words, we can use MPC as a threat mitigation strategy against APTs and other malware on our systems. We do this by altering the standard use case of MPC: instead of having multiple parties, we have one party who just happens to be using multiple computers.

Suppose we expect that an attacker is inside our network. If we put in place enough protection to ensure he is not on all of our computers at once, then we may be able to hold a secret key on an uncompromised computer. But we do not know which computer is uncompromised. Using MPC we can get around this. Consider the simplified situation where we have two computers, where we know it is likely that one is compromised and the other is not. We now “split” the encryption key between the two servers, for example via a one-time pad. Each server then essentially holds a random string, but the two strings combined give the secret key. We treat the two random strings as the inputs $x_1$ and $x_2$ above, and take $f$ to be the encryption function. In this way the two servers can perform the encryption without anybody ever knowing the underlying secret key itself.
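
Here is a hypothetical sketch of that setup (again my own illustration; a real deployment would evaluate AES inside an MPC protocol rather than the toy cipher used here). The key is one-time-pad shared between the two servers, and the function $f$ that recombines the shares and encrypts is what the servers would compute jointly.

```python
# Hypothetical sketch of "split the key, encrypt inside MPC".
# A toy SHA-256-based stream cipher stands in for AES, and the MPC step is
# run in the clear purely to show what function f the servers would compute.
import hashlib, secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key):
    """One-time-pad sharing: each server gets a random-looking share; only the
    XOR of the two shares equals the real key."""
    r = secrets.token_bytes(len(key))
    return r, xor_bytes(key, r)

def f_encrypt(key, nonce, message):
    """The function f the servers would evaluate inside MPC (toy stream cipher)."""
    stream = b""
    counter = 0
    while len(stream) < len(message):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return xor_bytes(message, stream[:len(message)])

key = secrets.token_bytes(16)
share1, share2 = split_key(key)  # what the two servers actually store

# Inside the MPC, the servers' inputs x1, x2 are their shares; the computed
# circuit first recombines key = x1 XOR x2 and then encrypts, so neither
# server ever learns the other's share or the full key.
nonce = secrets.token_bytes(8)
ciphertext = f_encrypt(xor_bytes(share1, share2), nonce, b"attack at dawn")
print(ciphertext.hex())
```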

We could use the same idea for messages. Instead of having one server decrypt a ciphertext, with the obvious problem of what happens when that server is compromised, a combination of servers could use MPC to decrypt the ciphertext and then use MPC to process the plaintext. In this way no single party ever sees the underlying key, or the underlying plaintexts, at all.

This might seem like pie in the sky, but we can actually perform such computations today, using protocols developed in Bristol and at other places around the world. For example, when we first implemented the AES functionality in a covertly secure manner back in 2009, we could evaluate a single AES block encryption between two parties in 60 seconds. Now this can take as little as 0.012 seconds, and we can process around 1000 such encryptions per second. With these latencies and throughputs we can actually implement solutions which help mitigate the problem of servers being compromised by APTs.

Of course, MPC only acts as a threat mitigation defence in such situations. If all machines are compromised, then all bets are off. But that is the same as with all security mechanisms: one needs to combine them to provide a secure system. The key message is that with MPC one no longer needs to keep all of one’s secrets in one place. So whilst the traditional cryptographic model may be dead in the modern world, the application of cryptography to address security problems is far from dead. So Long Live Crypto!

This post originally appeared on the BristolCrypto blog.