CRYPTO 2014 rump session

The rump session at CRYPTO had plenty of talks across many subfields of cryptology, and secure computation was no exception. The MPC part of the rump session was started by Xiao Shaun Wang, discussing “Circuit ORAM and on the Tightness of the Goldreich-Ostrovsky ORAM Lower Bound”. In particular, he discussed an implementation (co-authored with T-H. Hubert Chan and Elaine Shi) of Circuit ORAM, based on the JustGarble framework. This approach yields great improvements (up to a factor of 48x on 1 GB of data) over implementations based on Path ORAM.
Xiao’s talk was followed by Kartik Nayak discussing “Oblivious Data Structures”. Kartik considered the problem of data-access overhead when using an ORAM: specifically, the overhead is the amount of data transferred in the oblivious case divided by the amount transferred in the non-oblivious case. The best known result is O(log^2(N)/log log(N)) overhead [KLO’12]. Kartik and his co-authors (Xiao Shaun Wang, Chang Liu, T-H. Hubert Chan, Elaine Shi, Emil Stefanov, Yan Huang) manage to improve on this result for restricted access patterns, such as accesses with proximity or over bounded-degree trees. Such patterns reflect how data is generally accessed in most computer programs and algorithms. The authors are currently developing an open-source implementation using garbled circuits as a backend.
Xiao Shaun Wang then returned to the stage to continue the ORAM theme by presenting “SCVM: An Efficient, Automated RAM Model Secure Computation Framework”. The goal of SCVM is to make it possible for non-expert programmers to program secure computation tasks in a matter of hours. The SCVM system ensures security through a type system and is meant to offer efficiency competitive with customized circuits for a large class of functions. More specifically, the system works as follows: the programmer writes a program in a source language, which is passed to a frontend compiler. The compiler outputs an SCVM intermediate representation, which is then passed to a backend compiler. The compilers implement several compile-time optimizations, and the whole system will support rich library functionality such as data structures, machine learning and graph algorithms. Like the systems in the previous presentations, SCVM is based on ORAM with garbled circuits as a backend.
Next up was Wutichai Chongchitmate, describing how to do “Optimally Resilient and Adaptively Secure MPC with Low Communication Locality”. The project (joint work with Nishanth Chandran, Juan A. Garay, Shafi Goldwasser, Rafail Ostrovsky and Vassilis Zikas) considers how to do MPC with very many parties while retaining low communication complexity. Chongchitmate explained that this is possible to achieve even in the case of a dishonest majority controlled by an adaptive adversary. The core idea is to encode communication patterns into a symmetric-key infrastructure, in particular having every party use its symmetric keys to decide on its set of neighbors. Expander graphs are then used to show that it is infeasible for an adaptive adversary to discover these patterns and disconnect two honest parties.
Afterwards, Tore Frederiksen and Roberto Trifiletti discussed their work in progress (co-authored with Thomas Jakobsen and Jesper Nielsen) on practical optimizations of the LEGO paradigm for secure computation. The idea of LEGO is to use garbled gates to get maliciously secure two-party computation, but instead of doing cut-and-choose on entire garbled circuits, cut-and-choose is done on individual gates. Non-checked gates are then combined to construct a single fault-tolerant garbled circuit. However, putting individual gates together is expensive, so Tore and Roberto highlighted some new approaches for doing this, yielding much smaller constants than those in the previous LEGO works.
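To get an intuition for why gate-level cut-and-choose gives fault tolerance, here is a toy Monte Carlo model (this is my own illustrative sketch, not code or parameters from any LEGO paper): corrupted gates must survive both the random check and the random placement into majority-voting buckets.

```python
import random

def lego_cheat_probability(num_gates=1024, num_bad=8, bucket_size=3,
                           check_fraction=0.5, trials=2000, seed=1):
    """Toy model of gate-level cut-and-choose (LEGO flavor).

    The checker opens a random subset of gates; the surviving gates are
    randomly grouped into buckets that compute correctly as long as a
    majority of gates in each bucket is good.  The adversary 'wins' only
    if no bad gate is opened AND some bucket ends up with a majority of
    bad gates.  All parameters here are illustrative.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        gates = [True] * num_bad + [False] * (num_gates - num_bad)  # True = bad
        rng.shuffle(gates)
        num_checked = int(num_gates * check_fraction)
        checked, remaining = gates[:num_checked], gates[num_checked:]
        if any(checked):            # a bad gate was opened: adversary caught
            continue
        rng.shuffle(remaining)      # random assignment into buckets
        buckets = [remaining[i:i + bucket_size]
                   for i in range(0, len(remaining) - bucket_size + 1, bucket_size)]
        if any(sum(b) > bucket_size // 2 for b in buckets):
            wins += 1
    return wins / trials
```

Even with these small toy parameters, the success probability of the adversary is tiny, which is the effect that lets LEGO-style protocols use far fewer gates than circuit-level cut-and-choose for the same security level.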
The next speaker was Mike Rosulek, who described a new webpage project collecting summaries of papers in MPC. The summaries are short and easily readable, aimed in particular at first-year Ph.D. students and other people with basic knowledge of crypto and an interest in MPC. Currently the site contains 30 papers and a glossary, but the hope is to keep the site community-driven. If you want to have a look or to submit a summary, point your browser to http://tinyurl.com/mpc-annotated.
The summaries above cover the most MPC-relevant talks; however, the rump session contained many other interesting talks with applications to MPC. If you wish to have a look at the titles and slides, navigate to http://crypto.2014.rump.cr.yp.to.

MPC Workshop(s) in 2014

It looks like 2014 will be an interesting year for MPC. Here is a list of interesting events, in chronological order:

The third edition of this event, with a focus on practical aspects of cryptography, will take place in New York on January 13-15. Unfortunately registration has closed, but we still want to mention that MPC will be present at the workshop with the “Practical Multi-Party Computation” session on Monday afternoon.

We already blogged about this exciting event organized by Seny Kamara and Payman Mohassel, so here is just a little update: the list of talks accepted to the workshop is now available online. The event is free and registration is still possible.

Following the success of the 2012 edition, this year’s workshop will promote exchange between research on the theory and practice of MPC. Registration is free.

Analyzing public sector incomes on the cloud using MPC

Introduction

Following the success of deploying secure multi-party computation for a secure double auction in Denmark in 2008 and for the financial reporting of the ICT sector in Estonia in 2011, Cybernetica has developed a new MPC application, together with a web-based frontend, that provides statistical analysis of public sector incomes in Estonia.

The demonstration website was originally developed for the European Cloud Partnership Steering Board meeting held in Tallinn this summer. The main goal of the webpage was to show that MPC technology is mature enough to be used in real cloud applications that operate on sensitive data. Moreover, a web-based interface makes it accessible and convenient for many users, without the need to install special software.

We are glad to announce that we have decided to make this website a permanent demo.

Click here to see it run

Technical details

The online demonstration is built on the Sharemind secure multi-party computation framework, which uses three computation parties. The three nodes are deployed on three virtual machines provided by three distinct cloud service providers: Amazon EC2, Windows Azure and Zone Media. All the cloud instances reside in Europe.

As the application computes only average salaries, grouped by various categories, the MPC protocols used are quite simple and are thus not the most interesting part of the demo. What makes this demonstration unique is the fact that the MPC computation is done in real time and is controlled by a web frontend. The web application builds on the ICT sector benchmark analysis application deployed in Estonia in 2011. However, this time there is no caching, and the report computation is initiated directly from the web page.
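Sharemind’s actual protocols are more involved, but the flavor of computing an average over additively secret-shared values can be sketched in a few lines. This is a minimal single-process simulation under my own assumptions (the 32-bit ring and all names are illustrative choices, not Sharemind’s API):

```python
import random

MODULUS = 2 ** 32          # a 32-bit ring, an illustrative choice

def share(value, parties=3, rng=random):
    """Split an integer into additive shares modulo 2^32.
    Any two shares alone look uniformly random."""
    shares = [rng.randrange(MODULUS) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

def secure_average(salaries):
    """Each salary is shared among three nodes; every node adds up its
    own shares locally, and only the aggregate total is reconstructed.
    Assumes the true total fits in the ring."""
    per_party_totals = [0, 0, 0]
    for s in salaries:
        for i, sh in enumerate(share(s)):
            per_party_totals[i] = (per_party_totals[i] + sh) % MODULUS
    total = reconstruct(per_party_totals)
    return total / len(salaries)
```

The key point mirrored here is that addition is a purely local operation on shares, which is why average-style statistics are among the cheapest computations for an MPC deployment of this kind.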

Instead of embedding an HTTP server inside the Sharemind servers, we opted to use a separate HTTP server in front of each node that connects to Sharemind via its normal protocol and forwards all commands from the web.
This allows us to use existing, stable web server technologies.

Our implementation uses Node.js as the proxy server and the socket.io JavaScript library on top of that to provide two-way, WebSocket-like communication between the client’s web browser and the proxy server.

Future work

We plan to update the application to use the Sharemind application server, a newer, faster version of Sharemind with support for more protocols. Stay tuned!

“The Sharemind team” (sharemind@cyber.ee)

(The technical work was done by Reimo Rebane and Riivo Talviste at Cybernetica, with Dan Bogdanov supplying the vision. Thanks go out to our collaborators in STACC and the e-Governance Academy.)

Efficient Secure Computation at Eurocrypt 2013

There were multiple presentations about secure multi-party computation at Eurocrypt 2013. I’ll describe here only a few of these talks.

Tore Frederiksen presented work on “MiniLEGO: Efficient Secure Two-Party Computation From General Assumptions”, coauthored with Thomas P. Jakobsen, Jesper Buus Nielsen, Peter Sebastian Nordholt and Claudio Orlandi, all from Aarhus University, Denmark. This work builds on the LEGO paper of Nielsen and Orlandi from TCC 2009, which presented a secure two-party protocol that applied the cut-and-choose technique at the gate level: one party generates many gates, and the other party checks some of the gates and uses the others. A main technical challenge is how to connect together the gates that are chosen to be used. The original LEGO paper solved this problem by relying on a specific number-theoretic assumption and using public-key operations per gate. The new paper connects gates using a new XOR-homomorphic commitment scheme based on linear error-correcting codes and oblivious transfer. The entire construction is therefore based on symmetric primitives, except for the few seed OTs needed to bootstrap the OT extension.

Saeed Sadeghian presented his work with Payman Mohassel on “How to Hide Circuits in MPC: An Efficient Framework for Private Function Evaluation”. The goal of this work is to obtain Private Function Evaluation (PFE), where privacy here means that the computation hides the function that is computed (in the sense that both the wiring of the circuit and the gates are kept hidden). Valiant has shown that every circuit with |C| gates can be computed by a universal circuit of size O(|C| log |C|). PFE can also be based on a fully homomorphic encryption scheme. Katz and Malka designed a two-party PFE protocol based on singly homomorphic encryption that uses O(|C|) public-key operations.
The new work presents a general framework in the semi-honest (passive) adversary setting. The results are for 2PC and MPC, and for both binary and arithmetic circuits. A major tool is oblivious switching networks, which are based on OT (which in turn can be extended so that mostly symmetric-key operations are used). Private switching based on this tool is then applied to the Yao, GMW and CDN01 protocols.

Hoeteck Wee presented joint work with Dov Gordon, Tal Malkin and Mike Rosulek on “Multi-Party Computation of Polynomials and Branching Programs without Simultaneous Interaction”. The paper extends the results of Halevi et al. on secure one-pass protocols, which introduced this type of protocol and showed how to implement it for computing symmetric functions and choice functions (as well as presenting a generic but less practical protocol). The current work presents one-pass protocols for computing sparse multivariate polynomials, which can be used for computing statistics such as the variance, and for computing read-once branching programs, which can be used for computing functions such as string matching, finite automata, and second-price auctions. A major limiting factor of the new construction is that, unlike in the previous work, the order of the participants needs to be known in advance or computed on the fly.

Steve Lu presented his work with Rafi Ostrovsky titled “How to Garble RAM Programs”. This truly interesting work presents a secure two-party protocol in which the two parties mimic a computation on a RAM machine, without converting the RAM program into a circuit. The resulting protocol is non-interactive. Each access to the RAM is implemented using oblivious RAM (ORAM), where the ORAM needs to access locations that depend on the index of the item being looked up and are unknown at “compilation” time. In order to achieve this property without interaction, the protocol implements at step j a circuit that constructs the circuit for step j+1 and encodes in it the locations that will be accessed.

New Fully Homomorphic Encryption Library

The idea of a homomorphic encryption scheme is not a new one.

Already one year after the introduction of RSA (the first public-key encryption scheme discovered by the scientific community), it was noticed that RSA has a special property: if you multiply two ciphertexts together, the resulting value is a valid encryption of the product of the original plaintexts! In other words, by manipulating the ciphertexts (the random-looking strings containing your data) you are manipulating the content of the encryptions in a meaningful way.
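The multiplicative property is easy to see with textbook RSA, where Enc(m) = m^e mod n. A tiny example with deliberately insecure toy parameters, just to make the homomorphism concrete:

```python
# Toy textbook RSA with tiny, completely insecure parameters, only to
# exhibit the multiplicative homomorphism:
#     Enc(m1) * Enc(m2) mod n  decrypts to  m1 * m2 mod n
p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17
d = pow(e, -1, phi)            # modular inverse of e (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 9
c_product = (enc(m1) * enc(m2)) % n
assert dec(c_product) == (m1 * m2) % n   # 63: the product, never decrypted per-factor
```

The assertion holds because (m1^e * m2^e) mod n = (m1 * m2)^e mod n, i.e. multiplying ciphertexts multiplies plaintexts.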

A “fully” homomorphic encryption scheme

As said, even RSA allows the user to perform multiplications directly in the encrypted domain. Other cryptosystems allow one to perform additions in the encrypted domain. But for more than 30 years, we did not know how to construct a scheme that allows users to compute any function of encrypted values. That is why Gentry’s breakthrough in 2009 generated so much noise, even outside of the scientific community (e.g., Forbes, Business Week, Forbes again).

Among other things, a fully homomorphic encryption (FHE) scheme allows one to perform non-interactive secure computation, and in many applications this is crucial. The classic example is cloud computing: if you don’t trust your cloud provider with your data, you are in trouble. Either you have to hand over your private data in the clear (running the risk that the cloud provider looks into possibly confidential data), or you have to encrypt the data before uploading it (losing the advantage of having the cloud compute for you). Another example is an encrypted spam filter: you like that your mailbox is not filled with junk, but you might not be happy about Google/Microsoft/etc. reading the contents of all your emails.

But if you encrypt your data with an FHE scheme, the cloud can compute on your data without looking at it!

Wow! When can I start using FHE?

The first scheme proposed by Gentry was insanely inefficient, and many did not believe we would see a working implementation of an FHE scheme for a long time. Fortunately, there are a lot of smart cryptographers around and in the last 4 years the efficiency of FHE schemes has been improved by several orders of magnitude.

To be concrete, in 2010 the available FHE schemes were so inefficient that they would simply not run on any hardware. At Crypto 2012, Gentry, Halevi and Smart showed that it “only” takes 36 hours to evaluate AES in the encrypted domain. One year later this can already be done in under 3 hours! (If interaction is not a problem for your application, you could perform the same task using garbled circuits/oblivious transfer/etc. in a few seconds instead.)

HElib

The last result was obtained using a new library, recently released under the GPL license by Shai Halevi and Victor Shoup. From the authors’ announcement:

At long last, Victor Shoup and myself finally open-sourced (under GPL)
the HE library that we have been working on for the last year and
something. I created a project for this library on github, see

https://github.com/shaih/HElib

At this point, the documentation of this library is rather thin. You can
find a design-document under the doc sub-directory, and most modules
include fairly extensive comments in the source code itself, but that’s
pretty much it.

[…]

To get an idea for the performance of this library, I attach a short
excerpt from a presentation that I gave a few months ago. We should
probably be some 25% faster now than we were back then, but this gives
the right ballpark estimate.

We would very much appreciate comments, and would be even happier
if any of you wants to join us in developing this library.

— Victor & Shai

There is still a long way to go before you can start using FHE for your everyday problems, but these improvements are amazing and we should be grateful to Shai and Victor for making their work public.


Secure Supply Chain Collaboration

This blog post aims to highlight a promising application of secure (multi-party) computation in business software.

Let me first summarize my view of the technical requirements and accomplishments of secure computation. If multiple parties have inputs (requirement A) and these inputs must be kept highly confidential (requirement B), then they can still safely collaborate (accomplishment A). There is a scenario in business operations where these requirements are met and the accomplishment is useful. In fact, the data is so sensitive that collaboration often does not take place in practice because of security concerns. In that sense, secure computation is an enabler of collaborations that were not previously practical. This scenario is supply chain collaboration (SCC).

What is the fundamental problem of SCC?

Companies produce goods and services (either to customer order or to a planned stock level). For this they need to order supplies. The current process is as follows: a company determines how much it wants to produce, checks its supply and inventory, and then places orders with its suppliers. This simple process proceeds all the way to the top of the supply chain, where raw materials are sourced.

What is the fundamental problem with this approach?

It has long been known that this mode of operation does not lead to an optimal use of resources. Each company (locally) optimizes its use of capacity and stock, but the combination of locally optimal plans is rarely a globally optimal plan. Across the entire supply chain, significant resources are wasted, which implies higher costs for consumers. You might have heard of the bullwhip effect: in this mode of operation it is inevitable that the fluctuation of orders at the top of the supply chain is much higher than at the bottom. This implies that companies at the top of the supply chain need to maintain much larger capacities, which binds capital and incurs significant additional costs.
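The amplification is easy to reproduce in a toy simulation. The following is a stylized "beer game"-like model of my own; the forecasting rule and all parameters are illustrative, not taken from the supply chain literature. Each tier forecasts incoming demand by exponential smoothing and orders demand plus a correction toward a target inventory, and the variance of orders grows at every tier up the chain:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def simulate_bullwhip(demand, echelons=3, alpha=0.3, cover=2.0):
    """Propagate consumer demand up a chain of `echelons` tiers.

    Each tier keeps an exponentially smoothed forecast of the orders it
    receives and orders demand plus a correction toward a target
    inventory of `cover` periods of forecast demand (zero lead time,
    for simplicity).  Returns each tier's order stream, bottom to top.
    """
    orders_per_tier = []
    incoming = list(demand)
    for _ in range(echelons):
        forecast = incoming[0]
        inventory = cover * forecast
        placed = []
        for d in incoming:
            forecast = alpha * d + (1 - alpha) * forecast
            target = cover * forecast
            order = max(0.0, d + (target - inventory))
            inventory += order - d      # the order arrives immediately
            placed.append(order)
        orders_per_tier.append(placed)
        incoming = placed               # this tier's orders feed the next tier
    return orders_per_tier

consumer_demand = [10, 12, 9, 14, 8, 13, 10, 15, 9, 12, 11, 14]
tiers = simulate_bullwhip(consumer_demand)
# variance(tiers[0]) already exceeds variance(consumer_demand),
# and each further tier amplifies the fluctuations again
```

The inventory-correction term is what does the damage: every tier over-reacts to changes in its own forecast, so even a modestly fluctuating consumer demand turns into wildly swinging orders for raw materials.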

What can you do to prevent the problem?

Companies along the supply chain need to exchange data. They need to engage in a collaborative planning process. Supply chain management has come up with a variety of such planning methods. They differ in the number of participating parties — two or many — and in the economic quantity to be optimized. A large scale example with multiple parties that optimizes production, warehousing and transportation is supply chain master planning. A medium scale example with two parties that optimizes production and warehousing is collaborative planning, forecasting and replenishment (known as CPFR). A small scale example that optimizes warehousing is the joint economic lot size (JELS).

How can secure computation help?

A common problem in SCC is that partners are not willing to exchange the necessary data, such as costs and capacities, for security reasons. They fear disadvantages in future collaborations, e.g. price negotiations, due to the insight given into their price calculations. This is often true even for simple data exchanges, such as in vendor-managed inventory. Therefore, few of these schemes have found practical adoption so far. Supply chain researchers have come up with their own solutions, e.g. by using negotiation, yet these techniques rarely withstand a rigorous security analysis. Secure computation can provably implement these planning techniques without disclosing the input data. Hence, it may just be the technology that makes them acceptable in business practice.

What is the state of the art?

A number of specialized secure computation protocols have been proposed. The first one, which initiated the idea, was for CPFR (Atallah et al., 2006, M&SOM). A couple of others came later; e.g., Pibernik et al. (2011, EJOR) address the problem of inference from the result of a secure computation of the JELS. Even an attempt at something like supply chain master planning has been undertaken (Kerschbaum et al., 2011, Computer), and more are coming. Still, a couple of challenges remain. First, as always, increasing performance is a key challenge. Second, identifying the right computation (planning algorithm) to perform and the right computation model (cloud, etc.) to perform it in can be important for adoption in the market; this, of course, has an impact on which protocols are the fastest. Third, all aspects of security, such as malicious inputs or inferences from the result, need to be addressed.

In summary, supply chain collaboration presents a major opportunity for wider adoption of secure computation due to its high confidentiality requirements. There are a number of challenges to be solved by the cryptography and business community and only their collaboration is likely to bring practically viable results.

Florian Kerschbaum

florian dot kerschbaum (at) sap dot com

Using Secure Computation to Avoid Satellite Collisions

If you follow this blog, you probably already know that MPC allows a set of parties to compute any function of their inputs without compromising the privacy of their data.

The number of real world situations where MPC can be used to solve apparently unsolvable problems is only limited by our imagination, as this recent video from the Estonian team behind the Sharemind project shows:

If you can’t watch the video, here is a short summary: the growing number of satellites orbiting the planet is increasing the danger of collisions. This is not only a theoretical scenario: two satellites actually crashed in 2009. Collisions could be avoided by sharing (exact) information about the satellites’ orbits. However, satellite owners are not willing to make the orbits of their satellites public. But using MPC, the parties can cooperate and learn whether a collision is going to happen, and nothing else!

More information can be found on Sharemind’s blog. If you want to know more about secure computation techniques, you can visit the MPC Lounge (currently under construction).