Analyzing public sector incomes on the cloud using MPC

Introduction

Following the success of deploying secure multi-party computation for a secure double auction in Denmark in 2008 and for the financial reporting of the ICT sector in Estonia in 2011, Cybernetica has developed a new MPC application, together with a web-based frontend, that provides statistical analysis of incomes in the public sector in Estonia.

The demonstration website was originally developed for the European Cloud Partnership Steering Board meeting held in Tallinn this summer. The main goal of the webpage was to show that MPC technology is mature enough to be used in real cloud applications that operate on sensitive data. Moreover, a web-based interface makes the demo accessible and convenient for many users without the need to install special software.

We are glad to announce that we have decided to make this website a permanent demo.

Click here to see it run

Technical details

The online demonstration is built on the Sharemind secure multi-party computation framework, which uses three computation parties. The three nodes are deployed on virtual machines provided by three distinct cloud service providers: Amazon EC2, Windows Azure and Zone Media. All the cloud instances reside in Europe.

As the application computes only average salaries, grouped by various categories, the MPC protocols used are quite simple and are thus not the most interesting part of the demo. What makes this demonstration unique is that the MPC computation is done in real time and controlled by a web frontend. The web application builds on the ICT sector benchmark analysis application deployed in Estonia in 2011. However, this time there is no caching and the report computation is initiated directly from the web page.
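
To give a flavour of how such averages can be computed without any single node seeing an input, here is a minimal sketch of the additive secret sharing idea behind Sharemind, written in TypeScript. It is illustrative only: it runs in a single process, uses Math.random() in place of a cryptographic RNG, and the 32-bit ring merely mirrors Sharemind's published design.

```typescript
const M = 2n ** 32n; // ring size: 32-bit integers, as in Sharemind's design

// Split a secret into three additive shares modulo M.
// NOTE: Math.random() stands in for a proper cryptographic RNG here.
function share(secret: bigint): [bigint, bigint, bigint] {
  const r1 = BigInt(Math.floor(Math.random() * 2 ** 32));
  const r2 = BigInt(Math.floor(Math.random() * 2 ** 32));
  const r3 = (((secret - r1 - r2) % M) + M) % M;
  return [r1, r2, r3];
}

// Each computation party locally adds the shares it holds; publishing
// only the three partial sums reconstructs the total, not the inputs.
function sumShares(rows: [bigint, bigint, bigint][]): bigint {
  const partials: bigint[] = [0n, 0n, 0n];
  for (const row of rows) {
    row.forEach((s, i) => { partials[i] = (partials[i] + s) % M; });
  }
  return (partials[0] + partials[1] + partials[2]) % M;
}

const salaries = [1200n, 1450n, 980n].map(share); // each salary, secret-shared
const total = sumShares(salaries);
console.log(Number(total) / salaries.length); // 1210: the only value revealed
```

Grouped averages reduce to sums and counts per category, which is why the protocols behind this demo can stay so simple.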

Instead of embedding an HTTP server inside the Sharemind servers, we opted to place a separate HTTP server in front of each node; it connects to Sharemind via its normal protocol and forwards all commands from the web. This allows us to use existing, stable web server technologies.

Our implementation uses Node.js as the proxy server, with the socket.io JavaScript library on top of it to provide two-way, WebSocket-like communication between the client’s web browser and the proxy server.
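
As a rough sketch of this proxy layer, the following TypeScript snippet accepts socket.io connections from the browser and forwards traffic to a local node over plain TCP. The host, port and event names are placeholders of ours, not the actual Sharemind wire protocol.

```typescript
import { createServer } from "node:http";
import { Server } from "socket.io";
import { connect } from "node:net";

const httpServer = createServer();
const io = new Server(httpServer);

io.on("connection", (browser) => {
  // One upstream TCP connection to the local Sharemind node per client.
  // The host and port here are placeholders for this sketch.
  const node = connect({ host: "127.0.0.1", port: 30001 });

  // Forward commands from the web page to the node...
  browser.on("command", (payload: Buffer) => node.write(payload));
  // ...and stream the node's responses back to the page.
  node.on("data", (chunk) => browser.emit("result", chunk));

  browser.on("disconnect", () => node.end());
  node.on("error", () => browser.disconnect(true));
});

httpServer.listen(8080);
```

In the deployed demo there is one such proxy in front of each of the three nodes.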

Future work

We plan to update the application to use the Sharemind application server, a newer, faster version of Sharemind with support for more protocols. Stay tuned!

The Sharemind team (sharemind@cyber.ee)

(The technical work was done by Reimo Rebane and Riivo Talviste at Cybernetica, with Dan Bogdanov supplying the vision. Thanks go out to our collaborators in STACC and the e-Governance Academy.)

Secure Supply Chain Collaboration

This blog post aims to highlight a promising application of secure (multi-party) computation in business software.

Let me first summarize my view of the technical requirements and accomplishments of secure computation. If multiple parties have inputs (requirement A) and these inputs must be kept highly confidential (requirement B), then they can still safely collaborate (accomplishment A). There is a scenario in business operations where these requirements are met and the accomplishment is useful. In fact, the data is so sensitive that collaboration often does not take place in practice because of security concerns. In that way, secure computation is an enabler of collaborations that were not previously practical. This scenario is supply chain collaboration (SCC).

What is the fundamental problem of SCC?

Companies produce goods and services (either to customer order or to a planned stock level). For this they need to order supplies. The current process is as follows: a company determines how much it wants to produce, checks its supply and inventory, and then places orders with its suppliers. This simple process proceeds all the way to the top of the supply chain, where raw materials are sourced.

What is the fundamental problem with this approach?

It has long been known that this mode of operation does not lead to an optimal use of resources. Each company optimizes its use of capacity and stock locally, but the combination of locally optimal plans is rarely a globally optimal plan. Across the entire supply chain, significant resources are wasted, which implies higher costs for consumers. You might have heard of the bullwhip effect: in this mode of operation it is inevitable that the fluctuation of orders at the top of the supply chain is much higher than at the bottom. This implies that companies at the top of the supply chain need to maintain much larger capacities, which binds capital and incurs significant additional costs.

What can you do to prevent the problem?

Companies along the supply chain need to exchange data. They need to engage in a collaborative planning process. Supply chain management has come up with a variety of such planning methods. They differ in the number of participating parties (two or many) and in the economic quantity to be optimized. A large-scale example with multiple parties that optimizes production, warehousing and transportation is supply chain master planning. A medium-scale example with two parties that optimizes production and warehousing is collaborative planning, forecasting and replenishment (CPFR). A small-scale example that optimizes warehousing is the joint economic lot size (JELS).

How can secure computation help?

A common problem in SCC is that partners are not willing to exchange the necessary data, such as costs and capacities, for security reasons. They fear disadvantages in future collaborations, e.g. price negotiations, due to the insight given into their price calculation. This is often true even for simple data exchanges, such as in vendor-managed inventory. Therefore, few of these schemes have found practical adoption so far. Supply chain researchers have come up with their own solutions, e.g. by using negotiation. Yet these techniques rarely withstand a rigorous security analysis. Secure computation can implement these planning techniques provably without disclosing the input data. Hence, it may just be the technology that makes them acceptable in business practice.

What is the state of the art?

A number of specialized secure computation protocols have been proposed. The first, which initiated the idea, was for CPFR (Atallah et al., 2006, M&SOM). Others came later; e.g., Pibernik et al. (2011, EJOR) address the problem of inference from the result of a secure computation of JELS. Even an attempt at something like supply chain master planning was undertaken (Kerschbaum et al., 2011, Computer). And more are coming. Still, a couple of challenges remain: First, as always, increasing performance is a key challenge. Second, identifying the right computation (planning algorithm) to perform and the right computation model (cloud, etc.) to perform it in can be important for adoption in the market. This, of course, has an impact on which protocols are fastest. Third, all aspects of security, such as malicious inputs or inferences from the result, need to be addressed.

In summary, supply chain collaboration presents a major opportunity for wider adoption of secure computation, given the high confidentiality requirements of the data involved. There are a number of challenges to be solved by the cryptography and business communities, and only their collaboration is likely to bring practically viable results.

Florian Kerschbaum

florian dot kerschbaum (at) sap dot com

Using Secure Computation to Avoid Satellite Collisions

If you follow this blog, you probably already know that MPC allows a set of parties to compute any function of their inputs without compromising the privacy of their data.

The number of real-world situations where MPC can be used to solve apparently unsolvable problems is limited only by our imagination, as this recent video from the Estonian team behind the Sharemind project shows:

If you can’t watch the video, here is a short summary: the growing number of satellites orbiting the planet is increasing the danger of collisions. This is not merely a theoretical scenario: two satellites actually collided in 2009. Such collisions could be avoided by sharing exact information about satellite orbits. However, satellite owners are not willing to make the orbits of their satellites public. But using MPC, the parties can cooperate and learn whether a collision is going to happen, and nothing else!
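
To make the idea concrete, here is a TypeScript sketch of the function the operators would jointly evaluate, reduced to its simplest form. In a real deployment the positions would be secret-shared and the comparison evaluated under MPC; the coordinates and threshold below are made up for illustration.

```typescript
type Position = [number, number, number]; // km, Earth-centered coordinates

// The function the operators jointly evaluate: does the predicted
// closest approach fall below the safety threshold? Under MPC, a and b
// stay secret-shared; only the boolean leaves the computation.
function collisionWarning(a: Position, b: Position, thresholdKm: number): boolean {
  const d2 = a.reduce((acc, ai, i) => acc + (ai - b[i]) ** 2, 0);
  return d2 < thresholdKm ** 2; // compare squared values: no square root needed
}

// Made-up positions of two satellites at the same predicted epoch.
console.log(collisionWarning([7000, 0, 0], [7000.5, 0.3, 0], 1)); // true
```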

More information can be found on the Sharemind blog. If you want to know more about secure computation techniques, you can visit the MPC Lounge (currently under construction).

Crypto is Dead; Long Live Crypto!

At last week’s RSA conference, in the Cryptographers’ Panel, Adi Shamir said that we need to prepare for a post-crypto world. The same basic idea was put forward by Jonathan Zittrain in his invited talk at last year’s CRYPTO conference, entitled “The End of Crypto”. The basic idea behind both statements is that in the modern world the attacker is already inside the box, so no matter how much encryption you use, the bad guy will always get your data.

Let us elaborate on this idea a bit more. In the consumer field you may use your smartphone to store all sorts of important data (email addresses, passwords, banking data, etc.), but most users also download apps. Very few of these apps have had any form of code or security review, and so they could contain malicious code which compromises your phone and your security.

In the enterprise space the problem is even more acute. The existence of APTs (Advanced Persistent Threats) means that enterprises need to assume that the attacker is already within their perimeter. This has always been true, in that attacks often come from the inside, but now we have seen external attacks based on dormant code, lying in wait, within an enterprise’s security perimeter.

The usual cryptographic model is that there is at least one honest party in a protocol. This is still true: the user of a system may be honest, but if we cannot trust that the system they are using is uncompromised, we can trust essentially nothing. What is the point of encrypting data if the attacker already has the key? Even if the data has been encrypted successfully, at some point it needs to be decrypted so it can be processed. At the point of decryption, if the attacker controls the machine, he also controls the decrypted data. So, with such pervasive threats, how can crypto be of any use whatsoever? In some sense this means “Crypto is Dead!”.

Luckily, however, cryptography is about more than just encryption and authentication. Modern cryptography gives us a number of tools which we can use to secure systems. In Bristol we have been working for a number of years on turning a theoretical technology called multi-party computation (MPC) into a practical reality. The basic idea behind MPC is the following: suppose a set of n parties have their own secret inputs; for example, party i has input xi. Now suppose they want to compute some function of these inputs, say

f(x1,…,xn)

MPC allows us to do this, via a protocol, in such a way that the parties learn the output of the function, but no one learns anything about the other parties’ inputs. Traditional use cases which have been proposed for MPC include the sharing of databases, etc.

However, MPC also allows one to solve the problem of which hardware to trust in an enterprise environment such as that described above. In other words, we can use MPC as a threat mitigation strategy against APTs and other malware on our systems. We do this by altering the standard use case of MPC: instead of having multiple parties, we have one party who just happens to be using multiple computers.

Suppose we expect that an attacker is inside our network. If we put in place enough protection to ensure he is not on all of our computers at once, then we may be able to hold a secret key on the uncompromised computer. But we do not know which computer is uncompromised. Using MPC we can get around this. Consider the simplified situation where we have two computers, and we know it is likely that one is compromised and the other is not. We “split” the encryption key between the two servers, for example via a one-time pad. Each server then holds a random string, but the two strings combined give the secret key. We treat the two random strings as the inputs x1 and x2 above, and consider f to be the encryption function. In this way the two servers can perform the encryption without anybody ever knowing the underlying secret key itself.
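
Here is a minimal TypeScript (Node.js) sketch of just the key-splitting step; evaluating the encryption function on the shares is the MPC protocol itself and is not shown.

```typescript
import { randomBytes } from "node:crypto";

// Split a 128-bit key into two one-time-pad shares.
const key = randomBytes(16);     // the secret key to protect
const share1 = randomBytes(16);  // server 1's share: uniformly random
const share2 = Buffer.alloc(16); // server 2's share: key XOR share1
for (let i = 0; i < 16; i++) share2[i] = key[i] ^ share1[i];

// Either share alone is a uniformly random string and reveals nothing.
// XORing the shares, which the MPC protocol does only implicitly,
// reconstructs the key without either server ever holding it.
const recombined = Buffer.alloc(16);
for (let i = 0; i < 16; i++) recombined[i] = share1[i] ^ share2[i];
console.log(recombined.equals(key)); // true
```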

We could use the same idea for messages. Instead of having one server decrypt a ciphertext, with the obvious problem of what happens when that server is compromised, a combination of servers could use MPC to decrypt the ciphertext and then use MPC to process the plaintext. In this way no single party ever sees the underlying key or the underlying plaintext at all.

This might seem like pie in the sky, but we can actually perform such computations today, using protocols developed in Bristol and at other places around the world. For example, when we first implemented the AES functionality in a covertly secure manner back in 2009, we could evaluate a single AES block encryption between two parties in 60 seconds. Now this can take as little as 0.012 seconds, and we can process around 1000 such encryptions per second. With these latencies and throughputs we can actually implement solutions which help mitigate the problem of servers being compromised by APTs.

Of course, MPC only acts as threat mitigation in such situations. If all machines are compromised, then all bets are off. But that is the same with all security mechanisms: one needs to combine them to provide a secure system. The key message is that with MPC one no longer needs to place one’s secrets all in one place. So whilst the traditional cryptographic model may be dead in the modern world, the application of cryptography to address security problems is far from dead. So Long Live Crypto!

This post originally appeared on the BristolCrypto blog.