This week a large part of the MPC community is meeting in Aarhus for a workshop on Theory and Practice of Secure Computation. The local Crypto Group is going to blog live(ish) about all the talks at the workshop. As we are blogging live, our posts might contain inaccuracies and errors. Please write us if you find any!
The Aarhus Crypto Group, hosting the event and taking care of this series of blog posts.
The abstracts of the talks, as well as some of the slides, are available here.
Binary Tree Oblivious RAM Framework and Applications in MPC
Elaine started her talk visibly touched by remembering her coauthor and friend Emil Stefanov who tragically passed away this year.
In the first part of the talk Elaine talked about oblivious RAM (ORAM): in many applications the access pattern leaks significant information. For instance, if one wants to outsource genetic data, the access pattern is sensitive (as different parts of the genome encode different information). A different example (Islam et al., 2012) is how access patterns lead to severe leakage in email searches.
Clearly, if the following piece of code is run
if (sensitive_variable) then read x else read y;
then the access pattern leaks sensitive information. Oblivious RAM (ORAM) [GO96] is the solution to this problem. According to Elaine, the two main open problems in this area of research are:
- Can we achieve an ORAM with O(log N) bandwidth overhead and O(1) client storage? ([GO96] proved a log N lower bound, and the best known before this work was log²N / log log N.)
- Can it be practical?
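Returning to the branching snippet above, here is a minimal toy sketch (all names are illustrative) of why the access pattern alone leaks the secret, even when the server never sees any data in the clear:

```python
# Toy model: the server observes only WHICH address is read, not the data.
access_log = []  # what an honest-but-curious server can record

def server_read(addr, memory):
    access_log.append(addr)  # the access pattern is visible to the server
    return memory[addr]

def branch_on_secret(sensitive_bit, memory):
    # read x (address 0) if the secret bit is set, else read y (address 1)
    return server_read(0 if sensitive_bit else 1, memory)

memory = ["x", "y"]
branch_on_secret(True, memory)
branch_on_secret(False, memory)
# access_log is now [0, 1]: the server has recovered both secret bits
```

ORAM removes exactly this channel by making every access touch a freshly randomized set of locations.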
A promising approach to ORAM is the binary-tree ORAM framework, which is very simple and can be implemented in 15-30 lines of pseudo-code. The key observation in ORAM is that data should move around all the time: if it does not, and the user accesses the same data twice, the server will learn this.
All existing constructions move data around using oblivious sort, which is the main bottleneck of existing approaches. In binary-tree ORAM this is avoided: the data is stored on the server side in a binary tree, and each node is a bucket of size B. The client assigns each block x to a path (identified by a leaf l), and the invariant is that x will reside in some bucket on the path between the root and l (of course, the client needs to store this mapping now, but using recursion it will be put in the ORAM as well). To look up block x, one consults the position map and reads all the buckets on the path, and x is guaranteed to be there.
After the block is accessed, one has to write it back to a different path (otherwise this creates a linkage problem). So we pick a new leaf r at random and write x somewhere on the path from the root to r. We cannot write it at the leaf, otherwise the server learns more than it should. The same reasoning applies to any internal node on the path to r, and it turns out that the only safe place to write it is the root node.
This is fine at first, but soon the root node overflows. This is dealt with by clever eviction, pushing blocks towards the leaves subject to their invariants/paths. Random eviction works by choosing two random buckets at each level, doing a real eviction on one of them and a dummy one on the other. (If there is nothing in the bucket, two dummy evictions are performed.) Obliviousness is trivial (every data access visits a random path and two random buckets at each level), and recursion allows O(1) client storage by storing the position map in an external ORAM as well. Different ways of parametrizing this approach give rise to different results.
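The lookup, write-back-to-root, and random-eviction steps above can be sketched as follows. This is a toy, client-side-only sketch under my own simplifying assumptions (unbounded buckets, no recursion for the position map, no dummy reads), not the paper's implementation; all names are illustrative.

```python
import random

class BinaryTreeORAM:
    """Toy sketch of the binary-tree ORAM framework. Buckets are unbounded
    lists here; a real scheme bounds them and handles overflow."""

    def __init__(self, num_blocks):
        self.depth = max(1, (num_blocks - 1).bit_length())
        self.num_leaves = 1 << self.depth
        # Array-backed complete binary tree; node i has children 2i+1, 2i+2.
        self.tree = [[] for _ in range(2 * self.num_leaves - 1)]
        # Client-side position map: block id -> assigned leaf. (The paper
        # recurses to store this map in a smaller ORAM, giving O(1) storage.)
        self.pos = {}

    def _path(self, leaf):
        """Node indices from the root down to the given leaf."""
        node, path = leaf + self.num_leaves - 1, []
        while node > 0:
            path.append(node)
            node = (node - 1) // 2
        path.append(0)
        return list(reversed(path))

    def access(self, block_id, new_value=None):
        leaf = self.pos.get(block_id, random.randrange(self.num_leaves))
        value = None
        # Read every bucket on the path; the invariant guarantees the block
        # lies somewhere between the root and its assigned leaf.
        for node in self._path(leaf):
            for blk in self.tree[node]:
                if blk[0] == block_id:
                    value = blk[1]
                    self.tree[node].remove(blk)
                    break
        if new_value is not None:
            value = new_value
        # Remap to a fresh random leaf and write back into the ROOT bucket only.
        self.pos[block_id] = random.randrange(self.num_leaves)
        if value is not None:
            self.tree[0].append((block_id, value))
        self._evict()
        return value

    def _evict(self):
        """Random eviction: at each non-leaf level, pick two random buckets
        and push at most one real block toward the child on its path."""
        for level in range(self.depth):
            start, width = (1 << level) - 1, 1 << level
            for node in random.sample(range(start, start + width), min(2, width)):
                if not self.tree[node]:
                    continue  # a real scheme performs a dummy eviction here
                block_id, value = self.tree[node].pop(0)
                # Find the child of `node` on the path to the block's leaf.
                child = self.pos[block_id] + self.num_leaves - 1
                while (child - 1) // 2 != node:
                    child = (child - 1) // 2
                self.tree[child].append((block_id, value))
```

Note how obliviousness falls out of the structure: every `access` reads one uniformly random path (the old leaf is never reused) and every eviction touches two random buckets per level, regardless of which block was requested.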
In the second part of the talk Elaine presented some applications of ORAM to secure computation for “big data”. The main idea here is that naive RAM-to-circuit compilers incur O(N) cost for each dynamic memory access. Gordon et al. demonstrated how ORAM can give MPC sublinear in the input size for repeated queries. Here Elaine argued that ORAM can in fact lead to significant advantages even for one-time tasks, and presented an architecture for automating RAM-based secure computation by compiling programs into secure computation protocols. Among the features, usability for non-experts and efficiency gains from static analysis were mentioned.
Advances in Obfuscation
Amit Sahai talked about program obfuscation. Informally, an obfuscator O is a compiler which, on input a program P, outputs a new program O(P) that has the same functionality as P but cannot be reverse-engineered. It was shown that general-purpose obfuscation [B+01], in which the source code of P reveals no more information than can be learned from oracle access to P, is impossible to achieve for general programs. More specifically, it was shown that there are contrived ‘self-eating programs’ for which black-box obfuscation suffers from explicit attacks. Since then, only a few ad-hoc techniques for simple programs were proposed. Hence, from a theoretical point of view, the safest way forward is indistinguishability obfuscation (iO).
Recently, the proposal of candidate multilinear maps (mmaps) from ideal lattices restarted the research on program obfuscation. Using graded encoding schemes, a candidate iO for all programs was proposed by [GGHRSW13]. iO is a weaker notion which requires that, given any two equivalent programs P0 and P1 of similar size, the obfuscations of P0 and P1 be computationally indistinguishable. This was the first non-trivial candidate in the literature for all programs.
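In symbols, the iO requirement reads as follows (a standard formulation, paraphrased rather than taken from the slides):

```latex
\forall\, P_0, P_1 \;\text{with}\; |P_0| = |P_1| \;\text{and}\; P_0(x) = P_1(x)\ \forall x:
\qquad iO(P_0) \;\approx_c\; iO(P_1)
```

That is, the obfuscations of any two functionally equivalent, equal-size programs are computationally indistinguishable, so an obfuscation can hide at most *which* implementation of a function was used.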
Amit raised the question of how much confidence we have in obfuscation, and where its security comes from. So, the talk focused on the security of obfuscation. Two recent works were discussed:
- Generic security for obfuscation. [B+14]
- iO from the multilinear subgroup elimination assumption [GLSW14].
As for the first part, [B+14] is a variant of [GGHRSW13] analyzed against a more restricted adversary, under stronger assumptions. More specifically, they prove that the [GGHRSW13] compiler is a virtual black-box obfuscator in a generic multilinear map model. This work shows that there is a candidate obfuscator that cannot be broken by algebraic attacks, hence reducing the task of creating secure obfuscators in the plain model to obtaining sufficiently strong security guarantees on candidate instantiations of multilinear maps.
Regarding part two, the authors of [GLSW14] give a construction of general-purpose iO proven secure via a reduction to an instance-independent computational assumption over multilinear maps, the Multilinear Subgroup Elimination Assumption. They therefore base security on an assumption that does not directly provide obfuscated programs to the adversary.
The Multilinear Subgroup Elimination Assumption considers a multilinear map over a composite N with many large prime factors: one “special” prime factor c, k “distinguished” prime factors a_1, a_2, …, a_k, and polynomially many others. The adversary gets level-1 encodings of (random) generators of each prime subgroup (except c), and random elements of order c·a_1·a_2···a_(i-1)·a_(i+1)···a_k. Finally, the adversary is asked to distinguish between level-1 encodings of:
- a random element T of order a_1·a_2···a_k
- a random element T of order c·a_1·a_2···a_k
Recall that the assumption does not incorporate branching-program matrices, straddling sets (straddling sets allow decomposition of the adversary’s queries into queries that only depend on matrices corresponding to a single input), or circuits. Previous iO assumptions were either ad-hoc, directly incorporating obfuscated programs, or they were meta-assumptions (assumptions on assumptions, e.g. “all assumptions that satisfy X, Y, Z are true” [PST13]).
To sum up, the results presented by Amit help us understand where the security of obfuscation comes from: one could say it is now less likely that a [B+01]-style negative result is hiding in the iO works.
As pointed out by Amit, several open problems remain, such as proposing completely different obfuscation methods, avoiding mmap-like functionality altogether, or obtaining greater efficiency and security from more standard assumptions like LWE.
Private Function Evaluation: A General Framework and Efficient Instantiations
Payman presented a general framework for private function evaluation (PFE). PFE is similar to secure function evaluation (SFE), but contrary to SFE where the parties agree on a publicly known function F to be evaluated and only the inputs are private, PFE requires also that F itself is kept private, say, only known by one of the involved parties.
Applications of PFE include cases where the function to be evaluated is either classified, proprietary (i.e., protecting intellectual property), or where it may in other ways reveal vulnerabilities. In addition, Payman pointed out, PFE may sometimes help increase security in cases where too much information is leaked by the output of an SFE evaluation.
PFE constructions follow immediately by combining universal circuits, such as Valiant's construction from 1986, with standard SFE. As a consequence, Payman explained, all feasibility results from SFE carry over to PFE, and the focus of current PFE research is hence on efficiency rather than feasibility.
Most existing PFE constructions, with the exception of a result of Kolesnikov and Schneider (2008), are indeed based on universal circuits, combined with general MPC such as Yao or GMW in the case of boolean circuits or HE-based MPC such as that of Cramer, Damgård and Nielsen (2001) for arithmetic circuits. The downside of these PFE solutions based on universal circuits is their asymptotic complexity which is super-linear in the size of the circuit. Furthermore, PFE based on universal circuits tend to be complicated to implement in practice.
The key idea of the PFE framework that Payman presented in this talk is based on the observation that universal circuits essentially carry out two distinct jobs when used for PFE:
- They hide the topology of the circuit itself, i.e. the specific wiring of the gates and
- They hide the function that each gate implements, i.e., whether a gate is an AND, XOR, NOT gate for a boolean circuit, etc.
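To make the two jobs concrete, here is a deliberately insecure toy evaluator (the representation and all names are my own illustration, not from the paper): the wiring list is what topology hiding must protect, and the per-gate truth tables are what gate hiding must protect.

```python
# Insecure toy: a private function represented as (wiring, truth tables).
# A real PFE protocol realizes each part with a cryptographic subprotocol;
# here both are in the clear, purely to show the decomposition.

def evaluate_private_circuit(inputs, wiring, tables):
    """wiring[i] = (l, r): input-wire indices of gate i (the topology a
    PFE must hide). tables[i]: a 4-entry truth table encoding the gate's
    function (the other secret). Wires: inputs first, then one per gate."""
    wires = list(inputs)
    for (l, r), table in zip(wiring, tables):
        # The gate is evaluated by table lookup, never by naming AND/XOR/etc.
        wires.append(table[2 * wires[l] + wires[r]])
    return wires[-1]

AND = [0, 0, 0, 1]
XOR = [0, 1, 1, 0]
# f(a, b, c) = (a AND b) XOR c
result = evaluate_private_circuit([1, 1, 0], wiring=[(0, 1), (3, 2)], tables=[AND, XOR])
# result == 1
```

In the framework from the talk, the `wiring` argument is handled by the CTH functionality and the `tables` by PGE, each realizable by a different subprotocol.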
With these insights, Payman and his colleagues were able to avoid resorting to universal circuits by instead splitting PFE into two separate ideal functionalities: one for circuit topology hiding (CTH) and another for private gate evaluation (PGE). As a consequence, their PFE framework becomes quite flexible, allowing each of the two functionalities, CTH and PGE, to be implemented by separate subprotocols in various ways. In particular, CTH can be realized with any singly homomorphic encryption scheme, with general MPC, or with more specialized and efficient protocols. Depending on how the subprotocols for CTH and PGE are implemented, the properties of the resulting PFE vary, and Payman and his colleagues were able to achieve several results:
- By combining the GMW protocol with a realization of CTH using additively homomorphic encryption, the first general multiparty PFE for boolean circuits with complexity linear in the circuit size is obtained.
- By instead plugging in e.g. the protocol of Cramer, Damgård, and Nielsen (2001), the first general multiparty PFE for arithmetic circuits with linear complexity is obtained.
- In the boolean circuit two-party case, by instead using the well-known Yao protocol, a two-party PFE protocol with linear complexity is achieved, improving on the only earlier result not based on universal circuits, that of Kolesnikov and Schneider (2008).
- In the above cases, replacing the CTH implementation based on additively homomorphic encryption with one based on OTs yields a PFE solution that, despite its non-linear complexity, has good concrete efficiency since OT-extension can be used.
Payman concluded by mentioning that, recently, their team managed to improve the protocols to also achieve linear complexity in the presence of a malicious adversary, and he pointed out some open problems such as how to increase security by also hiding the circuit size without using FHE, and how to improve the practicality of PFE by using only a linear amount of symmetric operations rather than public key operations.
Adaptive MPC from New Notions of Non-Malleability
Muthu presented a general approach for obtaining adaptive UC-security. He started his talk by reviewing his STOC ’09 work with Lin and Pass [LPV09], which considered only static adversaries.
Typically, to obtain UC-security it is necessary to show simulatability of the honest parties and a flavor of non-malleability.
Usually both properties are obtained from a trusted setup. The main insight of [LPV09] was that the non-malleability requirement could be decoupled from the simulation requirement to achieve UC-security. Surprisingly, this is still the case even when considering adaptive security. The main ingredient used to obtain this is a commitment scheme that satisfies a strong definition of non-malleability: the scheme needs to be concurrently equivocal and non-malleable w.r.t. opening at the same time. This means both that it is possible to equivocate, and that even if the concurrent man-in-the-middle (MiM) adversary is given a commitment and the corresponding opening, it is still hard for him to come up with a commitment that opens to a related value. Notice that the MiM adversary has to send the “right” commitment *before* receiving the “left” opening, i.e. knowledge of the opening does not allow the MiM adversary to equivocate his already-committed value. Non-malleability is defined via extractability; interestingly, the extractor may use rewinding techniques (otherwise the definition collapses to a standard UC commitment).
The main theorem is that an adaptive UC-puzzle + a concurrently equivocal non-malleable commitment scheme + a simulatable public-key encryption scheme imply adaptively UC-secure MPC. A UC-secure puzzle captures the property that no adversary can successfully complete a puzzle (an NP problem) and also obtain a trapdoor, while a simulator exists that can generate (correctly distributed) puzzles together with trapdoors. UC-puzzles represent a unified framework for UC-security that allows one to capture many different models. The main theorem then implies as a corollary that adaptively UC-secure MPC is realizable in many different kinds of models. The approach subsumes all previous results on adaptive concurrent secure computation, both in relaxed models (e.g., quasi-polynomial time simulation) and in trusted setup models.