This week a large part of the MPC community is meeting in Aarhus for a workshop on Theory and Practice of Secure Computation. The local Crypto Group is going to blog live(ish) about all the talks at the workshop. As we are blogging live, our posts might contain inaccuracies and errors. Please write us if you find any!
The abstracts of the talks, as well as some of the slides, are available here.
Structured Encryption and Leakage Suppression
Seny Kamara gave a talk about an approach to encrypting databases and performing queries on the stored data. As usual in such a setting, the goal is to minimize the leakage from storage and queries while keeping the size of the encrypted data, the search time and the round complexity low. He started with an introduction to Searchable Symmetric Encryption (SSE) and ORAM. SSE is essentially a two-party protocol between a server and a client that allows the client not only to store and search a database, but also to add or remove data. Such an SSE scheme normally leaks information about the search or access pattern. ORAM-based solutions, on the other hand, store a RAM (not a database) and hide the access pattern. As a caveat, ORAMs are not optimized for searches, so a search must be simulated using read queries (with an inherent overhead). The idea is now to start from Structured Encryption (SE) and derive an ORAM scheme from it.
SE is a generalization of SSE in which data structures other than databases can be encrypted such that they can be queried privately; the queries will still leak information over time. In order to accomplish the task of encrypted database queries, one has to augment such an SE scheme with a “restructure” algorithm (which is not part of the original definition of SE) to bound and/or suppress leakage (trivially, this restructure algorithm should have small client-side space complexity). The restructuring applies a transformation to the encrypted data structure using an encrypted dictionary. While the access pattern to the data structure continuously leaks information (until a transformation occurs), the accesses to the dictionary do not. The transformation then returns a new instance of the encrypted data structure and thereby invalidates the old leakage.
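As a rough illustration of the kind of object involved (our own toy sketch, not the construction from the talk), here is a response-revealing SSE index in Python: labels are derived with a PRF, the server learns exactly which labels a query touches (the leaked access pattern), and a rebuild under fresh keys plays the role of the restructure step. All names and parameters are made up for illustration.

```python
import hmac, hashlib, os

def prf(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class ToySSE:
    """Toy encrypted index; in a real scheme the client holds the keys
    and the server stores only self.edx."""
    def __init__(self, index):
        self.rebuild(index)

    def rebuild(self, index):
        # the 'restructure' step: fresh keys and a fresh encrypted
        # structure, so leakage accumulated against old tokens is
        # invalidated
        self.kl, self.ke = os.urandom(32), os.urandom(32)
        self.edx = {}
        for kw, doc_ids in index.items():
            t = prf(self.kl, kw.encode())       # per-keyword search token
            for i, doc in enumerate(doc_ids):
                label = prf(t, i.to_bytes(4, "big"))
                pad = prf(self.ke, label)[:16]
                self.edx[label] = xor(doc.encode().ljust(16), pad)
        self.index = dict(index)                # kept only to re-run rebuild()

    def search(self, kw):
        # walk the labels until one is missing; the set of labels
        # touched is exactly the leaked access pattern
        t, out, i = prf(self.kl, kw.encode()), [], 0
        while True:
            label = prf(t, i.to_bytes(4, "big"))
            if label not in self.edx:
                return out
            pad = prf(self.ke, label)[:16]
            out.append(xor(self.edx[label], pad).rstrip(b" ").decode())
            i += 1

sse = ToySSE({"crypto": ["doc1", "doc7"], "mpc": ["doc2"]})
print(sse.search("crypto"))  # ['doc1', 'doc7']
```

Note how rebuild() changes every label in the structure, which is the sense in which a transformation "invalidates" leakage observed against the old instance.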
Seny then presented a formal construction, as well as three different instantiations of SE with different types of leakage. Together with a transform, these schemes yield different asymptotics for search.
Size-Hiding Secure Computation: Revisiting the Ideal Model
In the second talk in the afternoon session, Melissa Chase talked about size-hiding secure computation. In all existing definitions and constructions of secure computation, the size of the parties’ inputs is revealed. In her joint work with Rafail Ostrovsky and Ivan Visconti, Melissa considers secure two-party computation (2PC) where the size of one party’s input is private and constructs schemes for general functionalities that are secure against malicious adversaries under standard assumptions.
Size hiding is easily achieved in the case where a bound on the length of A’s input is known in advance. In this case, the input can simply be padded to this maximal length. Size hiding where there is no a priori polynomial bound on the parties’ input sizes has been considered for some specific functionalities. Last year, Yehuda Lindell, Kobbi Nissim and Claudio Orlandi (LNO13) investigated size-hiding secure computation for general functionalities in the semi-honest model. They defined several classes of size hiding, of which Melissa considers class 1.c: A’s input size |a| is hidden, but the input size |b| of B is revealed to A; only B receives the output f(a, b), and the length of the output is fixed.
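The bounded case mentioned above can be sketched in a few lines (our own illustration, with an arbitrary marker byte): padding every input up to the public bound makes the transmitted length independent of the actual input size.

```python
def pad_input(a: bytes, bound: int) -> bytes:
    # pad to the public bound so the wire length reveals only the bound;
    # the 0x80 marker followed by zeros allows unambiguous unpadding
    assert len(a) <= bound
    return a + b"\x80" + b"\x00" * (bound - len(a))

def unpad_input(p: bytes) -> bytes:
    body = p.rstrip(b"\x00")
    assert body.endswith(b"\x80")
    return body[:-1]

padded = pad_input(b"secret", 32)
assert len(padded) == 33            # always bound + 1, independent of |a|
assert unpad_input(padded) == b"secret"
```

The interesting setting of the talk is precisely the one where no such bound exists, so this trick does not apply.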
In this setting, there is a problem when considering malicious adversaries who corrupt party A: since A can choose its input freely, it cannot be excluded that, even though A is PPT, |a| is superpolynomial, yet A can still execute the protocol, e.g. by somehow forming a valid commitment to a. In this case, the PPT simulator S is faced with the impossible task of extracting this superpolynomially long input in order to simulate. To prevent this, one can use a proof of polynomial work (PPW) and require A to prove that its input has polynomial size in addition to committing to it. However, there is no known way to construct PPWs under standard assumptions, and they are shown to be necessary for general size-hiding 2PC against malicious adversaries as defined in LNO13. Melissa presented an interesting way out of this dilemma: use a polynomial-size representation of the input a instead of working with a directly. This requires a change in the security definition.
The first observation was that we require three properties from the ideal world in a simulation-based definition. It should 1) provide clear functionality and security guarantees, 2) be unconditionally secure and 3) be efficient. The last point rules out superpolynomial simulators. The key point here is that the trusted party T does not require a, it only needs to be able to compute f(a, b) for any b. The ideal model is changed so that A no longer gives its input a to T, but a polynomial-size representation rep(a) that uniquely defines a, along with a circuit C that computes both f(a, .) as well as a statistically sound and PPT-verifiable proof that the computation was done correctly. The trusted party then runs C on input b to obtain y and a proof that y = f(a, b). It outputs y to B if the proof is valid, otherwise it throws a special error that can only occur in the ideal world. Now the simulator needs to come up with rep(a) and C, but both are guaranteed to be of polynomial size. Security is defined as usual: For all PPT attackers A there is a PPT simulator S such that S almost never causes T to output the special error and the real world is indistinguishable from the ideal world. A nice feature of this definition is its equivalence to the standard definition in the case that input sizes are public and polynomial and a can be extracted from its representation in time poly(|a|).
The paper also contains a construction showing that this new notion is achievable under standard assumptions, but this was skipped in the talk for lack of time, owing to the many questions and the lively discussion during the presentation.
Arithmetic Cryptography
Wednesday’s last talk was given by Benny Applebaum under the title “Arithmetic Cryptography”. Benny started by pointing out that arithmetic circuits are much better suited for some tasks than boolean circuits, since boolean simulation can be expensive, non-modular and in some cases even infeasible. An arithmetic circuit parametrized by a field F takes as input a vector of field elements and provides, in addition to the standard field operations, zero testing, sampling of a random field element, and the neutral elements. This allows solving linear equations, but not less-than comparison or operations on the bit representation of field elements, which is completely concealed, to the point that no information on the length of the bit representation is available. While honest parties are modeled as arithmetic circuits, the adversary in this setting is non-arithmetic and is allowed to choose the concrete field F that is used in the computation.
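To make the model concrete, here is a minimal sketch (our own, not from the talk) of an evaluator for straight-line programs that offers exactly the operations of the arithmetic model: field operations, zero testing, random field elements and constants, and nothing bit-level.

```python
import random

def eval_arith_circuit(gates, inputs, p):
    """Evaluate a straight-line arithmetic program over F_p.

    Each gate is a tuple (op, *args) where args refer to earlier
    wire indices. Only the operations available in the arithmetic
    model are provided; no bit access, no less-than comparison.
    """
    wires = list(inputs)
    for op, *args in gates:
        if op == "add":
            wires.append((wires[args[0]] + wires[args[1]]) % p)
        elif op == "sub":
            wires.append((wires[args[0]] - wires[args[1]]) % p)
        elif op == "mul":
            wires.append((wires[args[0]] * wires[args[1]]) % p)
        elif op == "iszero":            # the only non-field operation
            wires.append(1 if wires[args[0]] % p == 0 else 0)
        elif op == "rand":              # fresh uniform field element
            wires.append(random.randrange(p))
        elif op == "const":             # e.g. the neutral elements 0 and 1
            wires.append(args[0] % p)
        else:
            raise ValueError(op)
    return wires[-1]

# equality test of the two inputs, written arithmetically:
gates = [("sub", 0, 1), ("iszero", 2)]
print(eval_arith_circuit(gates, [7, 7], 101))  # 1
print(eval_arith_circuit(gates, [7, 9], 101))  # 0
```

The point of the model is that such a program must work for every choice of F (which the adversary makes), so it cannot, say, branch on the bit length of an element.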
Previous work showed that several information-theoretic primitives can be implemented in this model, such as one-time pads, one-time MACs, secret sharing and MPC over fields, and randomized encodings. Computational primitives were only known to be implementable in weaker models, where e.g. an arbitrary bit representation of field elements is available or a special encoding scheme over F is given.
In this joint work with Jonathan Avron and Christina Brzuska, Benny obtained both positive and negative results. Assuming pseudorandomness of noisy random linear codes, he shows that commitment, symmetric and public-key encryption, and arithmetic OT exist, which is sufficient for secure two-party computation. On the negative side, additively homomorphic encryption, arithmetic garbled circuits, and secure computation with “low” communication complexity are all impossible. This shows that the standard boolean model differs from the arithmetic model, which can explain some limitations of previous constructions.
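The assumption can be illustrated by the sample distribution it concerns: pairs (a, ⟨a, s⟩ + e) over F_p, conjectured to be pseudorandom for suitable parameters. The sketch below (toy parameters, our own illustration, not a secure instantiation) just generates such samples using only field operations and randomness, i.e. operations available in the arithmetic model.

```python
import random

def noisy_linear_samples(s, num, p, noise_rate=0.125, rng=random):
    """Generate `num` samples (a, <a, s> + e) over F_p for secret s.

    With probability noise_rate, a nonzero error is added to the inner
    product. Toy parameters only; real instantiations need careful
    parameter choices.
    """
    n = len(s)
    out = []
    for _ in range(num):
        a = [rng.randrange(p) for _ in range(n)]
        b = sum(x * y for x, y in zip(a, s)) % p
        if rng.random() < noise_rate:
            b = (b + rng.randrange(1, p)) % p   # inject a nonzero error
        out.append((a, b))
    return out

samples = noisy_linear_samples([3, 1, 4, 1, 5], num=4, p=101)
```

The positive results of the talk build primitives such as commitment and arithmetic OT on the conjectured indistinguishability of these samples from uniform pairs.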
The rest of the talk consisted of the proof of an impossibility result in the arithmetic model.