This week I attended TCC 2014 at UCSD. This is a very limited report of what happened there. Many other things would be worth writing about (some of them can be found on the Bristol Crypto Group blog, such as a very interesting talk about fairness in secure computation by Gilad Asharov), but my (and your) attention span is limited.
The first session of TCC was dedicated to program obfuscation. Being at TCC, this of course means cryptographic obfuscation, and not the kind of code obfuscation you can see in the picture below. Cryptographers have dreamed about the ability to obfuscate programs since the invention of public-key cryptography, as it has countless cryptographic applications.
If you are here you are probably interested in secure computation, so you can think of obfuscation as non-interactive private function evaluation: Alice obfuscates a Boolean circuit implementing some function f and gives it to Bob, who should now be able to evaluate f on any input x of his choice, without learning anything more about f than what is leaked by the output f(x). This is what we call virtual black-box obfuscation (VBB).
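To make the interface concrete, here is a toy sketch (my own illustration, not anything from the talks): for a tiny input domain, simply publishing the truth table of f is a trivial VBB obfuscation, since Bob learns exactly what oracle access to f would give him and nothing about how f is implemented. Of course, the truth table grows exponentially in the input length, which is precisely why obfuscation of general circuits is hard.

```python
from itertools import product

def obfuscate_truth_table(f, n_bits):
    # Trivial "obfuscation" for tiny domains: the truth table of f.
    # It reveals only f's input/output behavior, i.e., oracle access.
    return {bits: f(bits) for bits in product((0, 1), repeat=n_bits)}

def evaluate(obf, x):
    # Bob evaluates on any input of his choice.
    return obf[tuple(x)]

# Alice's secret function (Bob never sees this code): majority of 3 bits.
secret_f = lambda b: int(sum(b) >= 2)
obf = obfuscate_truth_table(secret_f, 3)
print(evaluate(obf, (1, 0, 1)))  # -> 1
```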
Virtual Black-Box Obfuscation
In the first talk of TCC, Zvika Brakerski presented a candidate VBB obfuscator for all circuits. This is odd, since Barak et al. showed in 2001 that there exists no VBB obfuscator for all circuits. However, there is no contradiction here, since Zvika’s result holds only in an idealized model where the adversary only has “oracle access” to the underlying cryptographic primitive, and the impossibility result does not hold in this model. There are two ways of looking at this result: if you like your glass half empty, it shows a limitation of results in the “generic group model”, whereas if you like your glass half full, you can think of Zvika’s result as a step forward in understanding how to VBB-obfuscate everything that can be VBB-obfuscated.
Weaker Notions of Obfuscation
The session had two more talks on obfuscation. Marcel wrote about the second one on the Bristol Crypto Group Blog, so I will skip directly to the third one, by Elette Boyle, on extractability obfuscation. Since Barak et al. showed that VBB obfuscation is impossible, it makes sense to study other (weaker) security requirements for cryptographic obfuscation. The weakest of them all is called indistinguishability obfuscation (iO): this notion only guarantees that it is hard for an adversary to distinguish between the obfuscations of two circuits C1 and C2 that output the same values on every input. At first glance, this might seem a silly definition of security: if the two programs agree on every input, what is left to hide? In fact, if P=NP one could trivially construct an obfuscator satisfying this notion by outputting “the first” (in some lexicographic order) circuit that is consistent with C1 and C2 on every input. However, it turns out that iO can be very useful in cryptographic applications. My intuition for this is that it is sometimes hard to know what function a given circuit is implementing. Here is a simple example: let PRG be a cryptographic pseudorandom generator that expands a k-bit seed into a 2k-bit string, and let C_p be a circuit, parametrized by a string p, that on input s outputs 1 iff p=PRG(s). Now, does this circuit ever output 1? Deciding whether it does is equivalent to deciding whether p is in the support of the PRG, and thus to breaking the PRG. Therefore, using only iO and PRGs, one can construct a circuit C1 that outputs 1 only on a given input s whose obfuscation is indistinguishable from that of a circuit C2 that always outputs 0 (this is almost literally what Antonio and I did in this paper on circular security).
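The C_p circuit above is easy to write down explicitly. Here is a small sketch of it, where I stand in for the length-doubling PRG with a SHA-256-based construction (an assumption for illustration only; any secure PRG works). C1 hardwires a string in the PRG's range and so accepts exactly the seed s, while C2 hardwires a uniformly random string, which with overwhelming probability lies outside the range, making C2 the all-zero circuit:

```python
import hashlib
import os

def prg(seed: bytes) -> bytes:
    # Stand-in length-doubling PRG (k-byte seed -> 2k-byte output),
    # modeled with SHA-256 in counter mode. Illustration only.
    k = len(seed)
    out = b""
    counter = 0
    while len(out) < 2 * k:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[: 2 * k]

def make_C(p: bytes):
    # The circuit C_p: on input s, output 1 iff p == PRG(s).
    return lambda s: int(prg(s) == p)

k = 16
s = os.urandom(k)
C1 = make_C(prg(s))             # outputs 1 exactly on input s
C2 = make_C(os.urandom(2 * k))  # w.h.p. p is not in PRG's range: always 0
print(C1(s), C2(s))  # -> 1 0
```

Distinguishing obfuscations of C1 and C2 means deciding whether the hardwired p is in the PRG's range, which is exactly the task a secure PRG makes hard.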
After this long introduction, I can tell you about the new notion of obfuscation that Elette presented, called extractability obfuscation (eO), which guarantees that the only way of distinguishing an obfuscation of C1 from an obfuscation of C2 is by finding an input x where C1(x) differs from C2(x). This notion can be seen as a natural relaxation of iO in the following sense: in iO there exist no inputs on which the two circuits have different outputs, while in eO the only way the adversary can distinguish is by finding an input on which they do. Moreover, eO is weaker than VBB obfuscation and can therefore potentially be achieved under weaker assumptions. Finally, Elette showed some applications of eO where it is not clear whether one could achieve the same result using iO.
MPC from Obfuscation
This is the MPC Lounge, so let’s talk about MPC. Shai Halevi showed how to use obfuscation to construct two-round MPC protocols. It is clear that one cannot do MPC in one round: the best one can do in one round is to hardwire one’s input into the circuit, obfuscate it, and give it away. Say we want to compute a function f and my input is a; I can obfuscate g(x) = f(a, x) and give it to you. However, you can now evaluate g on many different inputs, and this clearly leaks more information about a than a single execution of the MPC protocol. The next best thing is two rounds, and Shai showed how to do it using a clever combination of FHE, obfuscation and NIZKs. The main idea is that the first round commits every party to their inputs and randomness, and then one can obfuscate the next-message function of any multi-round MPC protocol. Clearly this protocol is far from being useful in practice, but it is a very interesting feasibility result.
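A quick sketch of why the one-round "residual function" g leaks too much (my own toy example, not from the talk): take f(a, x) to be the inner product of a and x mod 2. One evaluation of f reveals a single bit about a, but holding g means you can query it on all the unit vectors and recover a completely.

```python
import random

def f(a, x):
    # Inner product mod 2: one honest evaluation leaks one bit about a.
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

n = 8
a = [random.randint(0, 1) for _ in range(n)]  # Alice's private input
g = lambda x: f(a, x)  # what Bob effectively holds after one round

# Bob queries g on the unit vectors e_0, ..., e_{n-1}: g(e_i) = a_i,
# so unlimited evaluation recovers Alice's entire input.
recovered = [g([int(i == j) for j in range(n)]) for i in range(n)]
print(recovered == a)  # -> True
```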
Invited Talk: Silvio Micali
The 2012 Turing Award winner Silvio Micali delivered the first invited talk. Silvio, who laid the foundations of modern cryptography by formally defining security for encryption schemes (semantic security) and cryptographic protocols (zero-knowledge and the simulation paradigm), is now interested in the field of mechanism design. I find mechanism design to be a very interesting field, and it is in some sense very related to MPC, as many have noticed before me. The first slide of every MPC talk shows a number of parties delivering their inputs to a trusted party, and then we show how to replace the trusted party with a cryptographic protocol. But we (almost) never worry about which functions the parties are computing, and what inputs the parties use! In mechanism design we study how to construct algorithms (mechanisms) that “force” parties to behave nicely, in the sense that no party has an incentive to deviate from the protocol rules. The main example of this is auctions, and Silvio started with a very enthusiastic explanation of first- vs. second-price auctions, and how in a second-price auction (assuming that participants are rational), everyone bids exactly their true valuation of the item.
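The second-price (Vickrey) truthfulness claim is easy to check numerically. Here is a small sketch (my own illustration): the winner pays the second-highest bid, so for any fixed bids of the others, no deviation from bidding one's true valuation ever yields a higher payoff.

```python
def utility(my_bid, my_value, other_bids):
    # Sealed-bid second-price auction payoff (ties lose, for simplicity):
    # the winner pays the highest *competing* bid, not their own.
    highest_other = max(other_bids)
    if my_bid > highest_other:
        return my_value - highest_other  # win, pay the second price
    return 0                             # lose, pay nothing

my_value = 10
for others in ([3, 7], [3, 12]):  # a case where I should win, one where I should lose
    truthful = utility(my_value, my_value, others)
    best_any_bid = max(utility(b, my_value, others) for b in range(21))
    print(truthful == best_any_bid)  # -> True: truthful bidding is optimal
```

Note the key design point the sketch exposes: because the payment depends only on the other bids, shading your bid can only change *whether* you win, never the price, so lying can only cost you.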
However, “once a cryptographer, always a cryptographer”, as Silvio puts it: traditional mechanism design assumes that parties are rational, do not collude and, moreover, do not care about privacy. Silvio therefore thinks that we (cryptographers) should get involved in this field. I was extremely happy to hear this, as I had already started thinking about the connection between privacy and mechanism design.
Silvio showed with a great example why collusion is a problem. Suppose the tax authority introduces the following rule: if A says that B paid his taxes, B is not going to be audited. This is desirable, as the tax authority spends money auditing citizens, so if it can avoid auditing B it saves money that can be used to build schools, hospitals, etc. The mechanism is also rational: from A’s point of view, it would be irrational to say that B paid his taxes if he did not, since it is in A’s interest that B pays his taxes (a small fraction of B’s taxes goes toward services that A uses). However, anyone can see that if this were implemented in the real world, A and B would simply collude: A says B paid all his taxes, B says A paid his, and no one pays taxes anymore. Silvio went on to show some recent mechanisms he has been working on that leverage other parties’ knowledge and are resilient to collusion (while offering some degree of privacy).
Invited Talk: Russell Impagliazzo
Russell Impagliazzo was the second invited speaker. He gave an amazing talk that made us reflect on deep, fundamental questions about what we are doing and where we are going as a scientific field. Russell talked about general versus specific hardness assumptions in cryptography. This is a very timely topic, as in the last few years plenty of novel cryptographic assumptions have been introduced to enable functionalities previously thought impossible, such as fully homomorphic encryption, multilinear maps and software obfuscation.
One of his first slides read: “cryptographers say problems are hard unless there’s a reason they are easy, while heuristic designers believe problems are easy unless there is a good reason they are hard”. However, Russell warned us against this, as history shows that some plausible assumptions have been broken, and it is often unclear how much cryptanalytic effort has gone into trying to break the assumptions we make up. This does not mean that one should not dream big: according to Russell, even if in the end we find out that obfuscation is indeed impossible and all candidate multilinear maps are broken, this research line is still important, as it would still deepen our understanding of computation.
Russell sees both generic assumptions (one-way functions, trapdoor permutations, …) and specific assumptions (RSA, discrete logarithm, …) as important. Looking at specific assumptions allows us to think about coincidences and functionalities (e.g., RSA is malleable, let’s compute on encrypted data!), while generic assumptions are important both conceptually and pragmatically (oh no, RSA is broken! No worries, just plug in a different one-way function!).
Russell also made us think about how we evaluate generic assumptions for plausibility: are they minimal (i.e., we have no choice: we need one-way functions if we want symmetric cryptography)? Are there many examples believed secure? Are they different-looking? And if the assumption fails, what happens? Would the world be too good to be true? Russell (et al.) showed that if one-way functions do not exist, then average-case generic learning is possible. Can we do the same for other assumptions? If there are no protocols for key agreement (in a strong sense), can we then extract all the hidden meaning from any conversation? If obfuscation does not exist (in a strong sense), can we understand something more about circuits (SAT, lower bounds)?
Another very interesting question Russell asked is: where do the assumptions end? Is there an ultimate cryptographic function that allows for all functionalities? Or, as Russell puts it, a “cosmic cube” for cryptography? Or are there properties that are in conflict, so that we will always need more than one assumption?