This week a large part of the MPC community is meeting in Aarhus for a workshop on Theory and Practice of Secure Computation. The local Crypto Group is going to blog live(ish) about all the talks at the workshop. As we are blogging live, our posts might contain inaccuracies and errors. Please write us if you find any!

The abstracts of the talks, as well as some of the slides, are available here.

## Outsourcing RAM Computation

### Daniel Wichs

Daniel delivered a talk on some recent exciting work on RAM computation. The talk was mostly based on two of his recent papers: (1) Garbled RAM, Revisited [published at Eurocrypt 2014] (joint work with Gentry, Halevi, Lu, Ostrovsky and Raykova) and (2) Outsourcing Private RAM Computation [available online] (joint work with Gentry, Halevi and Raykova). In general there are two equivalent models of computation in computer science: (i) circuits and (ii) Random Access Machines (RAMs), and most previous work has concentrated only on circuits. One reason to consider RAMs separately is that the circuit equivalent to a RAM program can suffer an enormous blow-up in size.

In this talk the main goal was to enable a weak client to leverage the resources of a powerful server to compute some function P(x) (viewed as a program P in this model) without revealing the input x. The efficiency requirements are as follows: (a) the client does much less work than computing P(x) itself, and (b) the server does not do much more work than is required to compute P(x). Previously, this kind of requirement was achieved using Fully Homomorphic Encryption (FHE), but one might notice that FHE is only applicable to circuits, not to RAMs. To achieve their goal they used a “reusable/one-time garbled RAM”, which was one of the most innovative parts of the talk. A well-aware reader might recall that although garbled circuits have been well known since the late 80’s and have been used in numerous cryptographic applications since then, this line of work considered “garbled RAM” for the first time. The term “reusable” refers to the fact that a reusable garbled RAM can compute on several (garbled, of course) inputs instead of only one, as is the case for a one-time garbled RAM. In this talk, Daniel explained both the one-time garbled RAM and the reusable variant, although for simplicity he focused mostly on the one-time case.

Going into more detail about the one-time garbled RAM: a RAM can be modelled as consisting of memory and a CPU, where the memory is a large array of data and the CPU does all the computation, accessing the memory in some specified manner. Basically, the CPU can be viewed as a circuit. Essentially, they garbled the circuit of the CPU and then used a PRF to protect the data, storing the key of the PRF inside the garbled CPU. The security of the garbled RAM can thus be established from the security of these well-known primitives. However, a circularity arises in the proof, which they could only resolve using identity-based encryption. He also talked about another possible fix based solely on one-way functions, but then the overhead (of the server's computation) becomes fairly large.
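As a very rough sketch of the PRF idea (not the actual construction, which must also garble the CPU circuit, handle writes, and deal with the circularity mentioned above), one can picture each memory location being masked with a location-specific pad derived from a PRF whose key lives inside the garbled CPU; here HMAC-SHA256 stands in for the PRF:

```python
import hmac, hashlib, secrets

def prf(key, msg):
    # HMAC-SHA256 as a stand-in PRF
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a, b):
    # XOR a short message with a (longer) pad; zip truncates the pad
    return bytes(x ^ y for x, y in zip(a, b))

k = secrets.token_bytes(16)           # PRF key, stored inside the garbled CPU
memory = [b"rec0", b"rec1", b"rec2"]  # toy memory contents

# Each location i is masked with pad PRF_k(i); only the garbled CPU,
# which holds k, can recompute the pad to read the location.
enc = [xor(m, prf(k, str(i).encode())) for i, m in enumerate(memory)]
dec = [xor(c, prf(k, str(i).encode())) for i, c in enumerate(enc)]
dec == memory  # True
```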

In the second part, he briefly talked about reusable garbled RAM, which essentially uses a reusable garbled circuit to garble the CPU instead of a one-time garbling. However, they could not prove security under the standard security notion of reusable garbled circuits; instead they introduced a weaker notion, which can be instantiated using obfuscation.


## Efficient Oblivious Transfer Extensions and Applications

### Thomas Schneider

Thomas gave a talk on the practicality of oblivious transfer (OT). The talk had two parts: how efficiently can we realize OT, and why is this so important? He answers the first question by recalling that it has previously been established that there cannot exist a black-box reduction of OT to a one-way function (if there were, this would imply P != NP), but that it is possible to “extend” a few OTs into many OTs using a one-way function. This is what is called an OT extension. A way of doing this is given in [IKNP03]. Thomas et al. implemented this protocol and discovered that, contrary to what one might expect, the computational bottleneck of the protocol was not any kind of cryptographic operation, but rather transposing a binary matrix. Thomas goes on to describe some optimizations of the original protocol which yield a much more efficient implementation. In the end, the remaining bottleneck turns out to be network communication.
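To give an idea of the (surprisingly non-cryptographic) bottleneck, here is the naive transposition of a binary matrix; the optimized implementations replace this with a cache-friendly, register-level transpose:

```python
def transpose(bits):
    # Naive transpose of a 0/1 matrix given as a list of rows.
    # In the OT extension, the matrix has one row per base OT
    # (e.g. 128) and one column per extended OT, so with millions
    # of extended OTs this step dominates the running time.
    return [[row[c] for row in bits] for c in range(len(bits[0]))]

m = [[1, 0, 1],
     [0, 1, 1]]
transpose(m)  # [[1, 0], [0, 1], [1, 1]]
```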

For the second part of the talk, Thomas tells us a little about why OT is so interesting. First of all, besides being a cornerstone of general secure computation, and in fact complete for secure computation, it can also be used to yield very efficient special-purpose protocols by itself. One example is privacy-preserving face recognition: this computation can be done by comparing the Hamming distance of a new picture and each reference picture in a database, which turns out to be very efficiently realizable using OT.
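The function being computed here is simple; what the OT-based protocol adds is that neither party learns the other's feature vector. In the clear, the matching step is just (with a made-up threshold and toy bit-vectors):

```python
def hamming(a, b):
    # Number of positions where two bit-vectors differ
    return sum(x != y for x, y in zip(a, b))

# Toy example: accept the probe if some reference picture is close enough.
probe = [1, 0, 1, 1, 0, 1, 0, 0]
database = [[1, 1, 1, 0, 0, 1, 0, 0],
            [0, 0, 0, 1, 1, 0, 1, 1]]
threshold = 3  # hypothetical matching threshold

any(hamming(probe, ref) <= threshold for ref in database)  # True
```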

## Reconstructing a shared secret in the presence of faulty shares – a survey

### Serge Fehr

The talk presented the state of the art on t-out-of-n secret sharing schemes (SSS) with the usual privacy property (any t shares give no information on the shared secret) and the *robust* reconstructability property. The latter means that we want to be able to reconstruct the secret even if t of the n shares are faulty. Unconditional security is considered and the dealer is assumed to be honest. An interesting application of these schemes is the problem of secure data storage.

It is well-known that if t < n/3, Shamir's scheme (plus Reed-Solomon decoding) gives a robust SSS, while if t >= n/2 the reconstruction of a shared secret in the presence of faulty shares is not possible. So the interesting interval is n/3 <= t < n/2, for which the talk showed three different solutions, all of them having the robust reconstruction property, but with some small failure probability.

The first one is due to Rabin and Ben-Or (1989) and is based on Shamir's scheme: the shares are computed by evaluating polynomials of degree <= t, but now, in the sharing phase, the dealer sends to player P_i the share s_i together with the pairs (k_ij, y_ji), where y_ji is a MAC of the share s_i under the key k_ji (for j = 1, 2, …, n). In other words, player P_i receives his share, the MACs of his share, and the keys necessary to verify all the other shares in the reconstruction phase. In the latter, a share is accepted iff it is consistent with at least t+1 MACs from other players, and the secret is then reconstructed using just the accepted shares. This scheme has good computational complexity, poly(k, n), but the overhead in the share size is Omega(k*n), where k is the security parameter.
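A toy simulation of the acceptance rule might look as follows; note that HMAC stands in for the information-theoretic MACs of the actual scheme, and fixed byte strings stand in for the Shamir shares:

```python
import hmac, hashlib, secrets

def mac(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

n, t = 7, 3
shares = [bytes([i]) * 4 for i in range(n)]  # stand-ins for Shamir shares
# keys[j][i]: key that player P_j uses to verify P_i's share
keys = [[secrets.token_bytes(16) for _ in range(n)] for _ in range(n)]
# tags[i][j]: MAC on s_i under k_ji, handed to P_i by the dealer
tags = [[mac(keys[j][i], shares[i]) for j in range(n)] for i in range(n)]

# In reconstruction, everyone announces (share, tags); a cheater can
# change his share but cannot forge tags under keys he does not know.
announced = list(shares)
announced[0] = b"evil"

def accepted(i):
    ok = sum(hmac.compare_digest(tags[i][j], mac(keys[j][i], announced[i]))
             for j in range(n) if j != i)
    return ok >= t + 1  # accepted iff consistent with >= t+1 MACs

[accepted(i) for i in range(n)]
# [False, True, True, True, True, True, True]
```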

The second scheme presented, due to Cramer, Damgard and Fehr (2001), exploits the idea that, together with the secret, the dealer can share a random element and a relation involving both of them (e.g. sharing the secret s, a random element r and their product r*s, all using Shamir's scheme). In the reconstruction phase, we keep trying to reconstruct from different sets of t+1 shares until the reconstructed elements indeed satisfy the given relation. This scheme has a lower share-size overhead, O(k + n), but its reconstruction phase is computationally inefficient.
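A toy version of this reconstruction (Shamir sharing over a small prime field, with the relation p = r*s checked for each candidate subset; the brute-force search over subsets is exactly why the scheme is computationally inefficient) could look like:

```python
import random
from itertools import combinations

P = 2**31 - 1  # toy prime field

def share(secret, t, n):
    # Degree-t Shamir sharing: n points on a random polynomial
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def interp0(points):
    # Lagrange interpolation at x = 0 over GF(P)
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

t, n = 2, 7
s, r = 42, random.randrange(1, P)
S, R, SR = share(s, t, n), share(r, t, n), share(s * r % P, t, n)

for delta, i in ((1, 0), (2, 1)):  # adversary corrupts two shares of s
    S[i] = (S[i][0], (S[i][1] + delta) % P)

def robust_reconstruct():
    # Try subsets of t+1 shares until the relation p = r*s holds
    for idx in combinations(range(n), t + 1):
        cs = interp0([S[i] for i in idx])
        cr = interp0([R[i] for i in idx])
        cp = interp0([SR[i] for i in idx])
        if cs * cr % P == cp:
            return cs

robust_reconstruct()  # 42
```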

Finally, the recent solution by Cevallos, Fehr, Ostrovsky and Rabani (2012) was presented. In this scheme the sharing phase is exactly the same as in the Rabin & Ben-Or protocol, except that MACs with smaller tags and keys are used, that is, |k_ij|, |y_ij| = O(k/n) instead of O(k). To compensate for the higher cheating probability of the MACs, the reconstruction phase is done in a cleverer way: whenever a share is rejected, the corresponding player is eliminated from the consistency-check list and is no longer considered when determining whether the shares of the other players have to be rejected. The CFOR scheme achieves short shares of size O(k + n log(n)) and runs in polynomial time. Serge showed the security proof in order to underline that it is non-trivial and non-standard. Indeed, in the new protocol it is not clear what the best strategy for an adversary is, and there is a circular dependency in the definition of an accepted share.

Finding a robust SSS with O(k) overhead (which is the proven lower bound) and adapting the solutions to non-threshold access/adversary structures are some of the open problems that were briefly illustrated at the end of the talk.

## Practical Private Database Querying

### Vladimir Kolesnikov

In this talk, Vladimir Kolesnikov describes BLIND SEER, a scalable platform for privacy-preserving database querying. This is joint work with a large team from Columbia University and the University of Toronto. The platform's name is in fact an acronym that sheds light on its functionality: BLIND SEER stands for BLoom INDex SEarch of Encrypted Results.

This platform combines a number of engineering tricks with fine-tuned cryptographic techniques to achieve private database querying without large overheads, which makes it practical.

Traditional databases use different query languages (usually variations of SQL) to select and retrieve individual records out of large data stores. These query languages allow a user to select records according to a number of parameters and/or relations between different records. However, in performing these searches, the database learns which records the user retrieves and the parameters used for the queries. Such leaks might be a huge liability in applications involving sensitive data, and also when outsourcing database engines.

The BLIND SEER infrastructure basically consists of a server that stores decryption keys, an index server that stores encrypted data in a structured format, a policy-enforcement server that examines queries (guaranteeing they don’t break access policies), and an end user. As Vladimir and the team designed BLIND SEER to counter the information leakage described above, the first challenge they encountered was to define what constitutes leakage and what kind of leakage is meaningful. This turns out to depend on the exact application and, of course, to be limited by the cryptographic techniques employed.

In order to prevent leakage of search parameters and data access patterns, BLIND SEER uses a Bloom Filter (BF). This mechanism allows for querying keywords in constant time while having the same data access pattern for *every* search. Hence, if the BF is masked with a random pad, the index server cannot determine the concrete search parameters and retrieved records by observing the queries (which are also masked) or differences between access patterns of different queries (since the pattern is always the same).
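A toy sketch of the idea is given below, with the membership check done in the clear for readability; in BLIND SEER this unmasking happens inside a garbled circuit, so the index server never sees the probed bits (the filter size, hash count, and hash construction here are all made up for illustration):

```python
import hashlib, secrets

M, K = 64, 3  # toy Bloom-filter parameters: M bits, K hash functions

def positions(keyword):
    # The K bit positions probed for a keyword
    return [int.from_bytes(hashlib.sha256(f"{i}:{keyword}".encode()).digest(),
                           "big") % M for i in range(K)]

def build(keywords):
    bf = [0] * M
    for w in keywords:
        for p in positions(w):
            bf[p] = 1
    return bf

bf = build(["alice", "bob"])
mask = [secrets.randbelow(2) for _ in range(M)]  # user's random pad
masked_bf = [b ^ m for b, m in zip(bf, mask)]    # held by the index server

def query(keyword):
    # Unmask only the K probed bits; the access pattern is always the
    # same shape (K positions), whether or not the keyword is present.
    return all(masked_bf[p] ^ mask[p] for p in positions(keyword))

query("alice")  # True (and absent keywords fail, up to false positives)
```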

Yao’s garbled circuits are used to make it possible for a user to perform a “masked query” that searches for a given keyword over a binary tree of masked Bloom filters stored in the index server.

Basically, the user obtains the appropriate mask from the decryption-key server and runs a garbled circuit with the index server, to which the user inputs the mask and the index server inputs the masked BF. This circuit outputs the plaintext index (or corresponding decryption key) of the retrieved record to the user, and nothing to the server. The system also offers security against malicious users who might deviate from the protocol by means of the policy-enforcement server, which runs a policy-checking circuit to verify access policies.

Unfortunately, BLIND SEER can’t fully prevent leakage of access patterns. Due to the exact way the search tree is constructed and traversed, the tree search pattern and the record access pattern are revealed in some types of queries. Nevertheless, this is a price the platform must pay for its very high efficiency, which is proportional to the number of query matches and appears to be asymptotically optimal for some types of queries.

Vladimir presented an interesting comparison between an (unoptimized) implementation of BLIND SEER and MySQL. Amazingly, for most types of queries, BLIND SEER performs pretty fast, being not much slower than MySQL. It is important to note that the implementation used for this comparison was very basic and not optimized at all (not even parallelized), so better performance can be expected from further optimized implementations.