Minutes from TCC: Obfuscation, Tax Evasion and The Cosmic Cube

This week I attended TCC 2014 at UCSD. This is a very limited report of what happened there. Many other things would be worth writing about (some of these can be found on the Bristol Crypto Group blog, such as a very interesting talk about fairness in secure computation by Gilad Asharov), but my (and your) attention span is limited.


Sex! God! Did I get your attention now? (Seen at UCSD)



The first session of TCC was dedicated to program obfuscation. Being at TCC, this is of course about cryptographic obfuscation, and not the kind of code obfuscation you can see in the picture below. Cryptographers have been dreaming about the ability to obfuscate programs since the invention of public-key cryptography, as it has countless cryptographic applications.

If you are here you are probably interested in secure computation, so you can think of obfuscation as non-interactive private function evaluation: Alice obfuscates a Boolean circuit implementing some function f and gives it to Bob, who should then be able to evaluate it on any input x of his choice, without learning anything more about f than what is leaked by the output f(x). This is what we call virtual black-box (VBB) obfuscation.
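As a toy Python sketch of the interface we are after (the wrapper below has the right *shape* but hides nothing at all, which is exactly what a real VBB obfuscator would have to fix; the function f is an arbitrary illustrative choice):

```python
def toy_obfuscate(f):
    """Ideal (VBB) interface: return a program equivalent to f that reveals
    nothing beyond input/output behaviour. This toy wrapper has the right
    interface but hides nothing at all."""
    return lambda x: f(x)

# Alice hardwires her secret bit a into f(a, x) = a AND x ...
a = 1
g = toy_obfuscate(lambda x: a & x)

# ... and Bob evaluates on inputs of his choice, learning only the outputs.
outputs = [g(0), g(1)]
```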

One of the 2004 winning entries at the International Obfuscated C Code Contest (source: http://www.codinghorror.com/blog/2005/05/obfuscating-code.html)

Virtual Black-Box Obfuscation

In the first talk of TCC, Zvika Brakerski presented a candidate VBB obfuscator for all circuits. This sounds odd, since Barak et al. showed in 2001 that there exists no VBB obfuscator for all circuits. However, there is no contradiction here: Zvika’s result holds only in an idealized model where the adversary has only “oracle access” to the underlying cryptographic primitive, and the impossibility result does not hold in this model. There are two ways of looking at this result: if you like your glass half empty, it shows a limitation of results in the “generic group model”, whereas if you like your glass half full, you can think of Zvika’s result as a step forward in understanding how to VBB-obfuscate everything that can be VBB-obfuscated.

Weaker Notions of Obfuscation

The session had two more talks on obfuscation. Marcel wrote about the second one on the Bristol Crypto Group blog, so I will skip directly to the third one by Elette Boyle, on extractability obfuscation. Since Barak et al. showed that VBB obfuscation is impossible, it makes sense to study other (weaker) security requirements for cryptographic obfuscation. The weakest of them all is called indistinguishability obfuscation (iO): this notion only guarantees that it is hard for an adversary to distinguish between the obfuscations of two circuits C1, C2 if C1 and C2 output the same values on every input. At first look this might seem a silly definition of security: if the two programs output the same value on all inputs, what is left to hide? In fact, if P=NP one could trivially construct an obfuscator satisfying this notion by outputting “the first” (in some lexicographic order) circuit that is consistent with C1 and C2 on every input. However, it turns out that iO can be very useful in cryptographic applications. My intuition for this is that it is sometimes hard to know what function a given circuit is implementing. Here is a simple example: let PRG be a cryptographic pseudorandom generator that expands a k-bit seed into a 2k-bit string, and let C_p be a circuit parametrized by a string p that on input s outputs 1 iff p=PRG(s). Now, does this circuit ever output 1? Deciding this is equivalent to deciding whether p is in the support of the PRG, and thus to breaking the PRG. Therefore, using only iO and PRGs, one can construct a circuit C1 that outputs 1 only on a given input s but is indistinguishable from a circuit C2 that always outputs 0 (this is almost literally what Antonio and I did in this paper on circular security).
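A concrete toy version of the circuit C_p in Python (the SHA-256-based PRG is just an illustrative stand-in, not a provably secure construction):

```python
import hashlib

def prg(seed: bytes) -> bytes:
    """Toy length-doubling PRG: k bytes in, 2k bytes out (an illustrative
    stand-in, not a provably secure construction)."""
    out = hashlib.sha256(b"L" + seed).digest() + hashlib.sha256(b"R" + seed).digest()
    return out[:2 * len(seed)]

def make_C(p: bytes):
    """C_p: on input s, output 1 iff p == PRG(s)."""
    return lambda s: 1 if prg(s) == p else 0

s_star = b"\x01" * 16          # a 16-byte seed
C1 = make_C(prg(s_star))       # outputs 1 only on s_star
C2 = make_C(b"\x00" * 32)      # p outside the PRG's image (w.h.p.)
```

Telling C1 apart from the all-zero circuit C2 by inspecting p alone amounts to deciding whether p lies in the PRG's image, which is exactly what PRG security forbids.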

After this long introduction, I can tell you about the new notion of obfuscation that Elette presented, called extractability obfuscation (eO), which guarantees that the only way of distinguishing an obfuscation of C1 from an obfuscation of C2 is by finding an input x where C1(x) differs from C2(x). This notion can be seen as a natural relaxation of iO in the following sense: in iO there exist no inputs on which the two circuits have different outputs, while in eO the only way the adversary can distinguish is by finding an input on which they do. Moreover, eO is weaker than VBB obfuscation and can therefore potentially be achieved under weaker assumptions. Elette finally showed some applications of eO where it is not clear whether one could achieve the same result using iO.

MPC from Obfuscation

This is the MPC Lounge, so let’s talk about MPC. Shai Halevi showed how to use obfuscation to construct two-round MPC protocols. It is clear that one cannot do MPC in one round: the best one can do in one round is to hardwire one’s input into the circuit, obfuscate it, and give it away. Say we want to compute a function f and my input is a; I can obfuscate g(x) = f(a, x) and give it to you. However, you can now evaluate g on multiple different inputs, and this clearly leaks more information about a than a single execution of the MPC protocol would. The next best thing is two rounds, and Shai showed how to do it using a clever combination of FHE, obfuscation and NIZKs. The main idea is that the first round commits every party to their input and randomness, and then one can obfuscate the next-message function of any multi-round MPC protocol. Clearly this protocol is far from useful in practice, but it is a very interesting feasibility result.
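A toy Python example of why one round leaks too much (f here is equality of small integers, an arbitrary illustrative choice):

```python
def one_round_message(a: int):
    """All a one-round protocol can do: fix my input a and hand over the
    residual function g(x) = f(a, x). Here f is equality of 4-bit values."""
    return lambda x: int(a == x)

g = one_round_message(a=11)

# One honest evaluation leaks a single bit, f(a, x0)...
one_bit = g(3)
# ...but nothing stops the receiver from evaluating g everywhere,
# recovering a completely:
recovered_a = next(x for x in range(16) if g(x) == 1)
```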

Invited talk: Silvio Micali

The 2012 Turing Award winner Silvio Micali delivered the first invited talk. Silvio, who laid the foundations of modern cryptography by formally defining security for encryption schemes (semantic security) and cryptographic protocols (zero-knowledge and the simulation paradigm), is now interested in the field of mechanism design. I find mechanism design to be a very interesting field, and it is in some sense closely related to MPC, as many have noticed before me. The first slide of every MPC talk shows a number of parties delivering their inputs to a trusted party, and then we show how to replace the trusted third party with a cryptographic protocol. But we (almost) never worry about which function the parties are computing, or which inputs the parties use! In mechanism design we study how to construct algorithms (mechanisms) that “force” parties to behave nicely, in the sense that no party has an incentive to deviate from the protocol rules. The main example of this is auctions, and Silvio started with a very enthusiastic explanation of first- vs. second-price auctions, and how in a second-price auction (assuming that participants are rational), everyone bids exactly their true valuation of the item.
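The second-price rule itself fits in a few lines of Python (a toy sealed-bid version; ties and more careful game-theoretic modelling omitted):

```python
def second_price_auction(bids):
    """Vickrey (second-price) auction: the highest bidder wins but pays
    only the second-highest bid, which is what makes truthful bidding a
    dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

winner, price = second_price_auction({"alice": 10, "bob": 7, "carol": 3})
```

Intuitively, bidding above your valuation risks winning at a price you don't want to pay, and bidding below it only risks losing an item you would have been happy to buy at the second price.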

However, “once a cryptographer, always a cryptographer” as Silvio puts it: traditionally mechanism design assumes that parties are rational and do not collude and that, moreover, they do not care about privacy. Therefore Silvio thinks that we (cryptographers) should get involved in this field. I was extremely happy to hear this, as I already started thinking about the connection between privacy and mechanism design.

Silvio showed with a great example why collusion is a problem. Suppose the tax authority introduces the following rule: if A says that B paid his taxes, B is not going to be audited. This is desirable, as the tax authority spends money auditing citizens, so avoiding an audit of B saves money that can be used to build schools, hospitals, etc. The mechanism is also rational: from A’s point of view, it would be irrational to say that B paid his taxes if he did not, since it is in A’s interest that B pays his taxes (a small fraction of B’s taxes goes into services that A uses). However, anyone can see that if this were implemented in the real world, A and B would simply collude: A says B paid all his taxes, B says that A paid his, and no one pays taxes anymore. Silvio went on to show some recent mechanisms he has been working on that leverage other parties’ knowledge and are resilient to collusion (while offering some degree of privacy).

Invited Talk: Russell Impagliazzo

Russell Impagliazzo was the second invited speaker. Russell gave an amazing talk that made us reflect on deep, fundamental questions about what we are doing and where we are going as a scientific field. He talked about general versus specific hardness assumptions in cryptography. This is a very timely topic, as in the last few years plenty of novel cryptographic assumptions have been introduced to enable functionalities that were previously thought impossible, such as fully homomorphic encryption, multilinear maps and software obfuscation.

One of his first slides read: “cryptographers say problems are hard unless there’s a reason they are easy, while heuristic designers believe problems are easy unless there is a good reason they are hard”. However, Russell warned us against this attitude, as history shows that some plausible assumptions have been broken, and it is often unclear how much cryptanalytic effort has gone into trying to break the assumptions we make up. This does not mean that one should not dream big: according to Russell, even if in the end we find out that obfuscation is indeed impossible and all candidate multilinear maps are broken, this research line is still important, as it deepens our understanding of computation.

Are we building our cryptographic protocols on solid assumptions? (Do Ho Suh, Fallen Star - Stuart Collection - UC San Diego)


Russell sees both generic assumptions (one-way functions, trapdoor permutations, …) and specific assumptions (RSA, discrete logarithm, …) as important. Looking at specific assumptions allows us to think about coincidences and functionalities (e.g., RSA is malleable, let’s compute on encrypted data!), while generic assumptions are important both conceptually and pragmatically (oh no, RSA is broken! No worries, just plug in a different one-way function!).

Russell also made us think about how we evaluate generic assumptions for plausibility: are they minimal (i.e., we have no choice: we need one-way functions if we want symmetric cryptography)? Are there many instantiations believed secure? Are they different-looking? And if the assumption fails, what happens? Would the world be too good to be true? Russell (et al.) showed that if one-way functions do not exist, then average-case generic learning is possible. Can we do the same for other assumptions? If there are no protocols for key agreement (in a strong sense), can we then extract all the hidden meaning from any conversation? If obfuscation does not exist (in a strong sense), can we understand something more about circuits (SAT, lower bounds)?

Another very interesting question that Russell asked is: where do the assumptions end? Is there an ultimate cryptographic function that allows for all functionalities? Or, as Russell puts it, a “cosmic cube” for cryptography? Or are there properties that are in conflict, so that we will always need more than one assumption?

The cosmic cube (from wikipedia). I had to google this.



WAMPC, Day 2, Afternoon

A delegation of the Aarhus Crypto Group (Bernardo David, Tore Frederiksen, Claudio Orlandi, Tomas Toft and Roberto Trifiletti) is attending the Workshop on Applied Multi-Party Computation and will attempt to cover this event on the MPCLounge.

We decided that timeliness is more important than perfection, and we therefore apologize for any mistake in these posts!

This is the last of a 4 post series.

First Session: Panel on the Business Case for MPC

Seny Kamara, Javier Salido, Vladimir Kolesnikov, Florian Kerschbaum

What is the killer app for MPC?
Florian opened the discussion by noting that companies do not want to trust cloud providers when outsourcing database storage or computation; this might be a setting where MPC could shine. Javier continued by considering the privacy-sensitive data a company holds. It turns out that companies in general only want to obey the law with regard to the storage of private data, which suggests that MPC might need to be forced upon companies by legislation. However, legislation is made by lawyers, who do not know about MPC, so it might be desirable to communicate more with lawyers.

The discussion continued by pointing out that companies might also need to use MPC in other settings, either as services offered directly to customers or to securely compute on data owned by other companies. Unfortunately, when you start talking about MPC, ZK and commitments, IT folks at companies start asking questions, as they are only familiar with hashing, encryption, PKI, etc., and not with protocols or advanced primitives. So it is very hard to convince them that MPC actually works and is secure. Furthermore, it is not trivial to use and implement crypto, and history teaches us that to avoid disasters one should not let amateurs implement crypto. Yehuda pointed out that real-world projects and applications using MPC are on their way. Still, Florian noted that there are applications at existing companies which could be solved using MPC but are not being pursued, because putting out new products is much harder than taking existing products and adding a layer of privacy and security.

Are there parts of the world where it is easier to get MPC adopted?
The discussion started by considering whether the average Joe cares about the privacy of his personal data. It seems to depend on what the data is and how it is going to be used. Unfortunately, it is not always clear what is being logged and how it is used. It also depends on a cost/benefit analysis: what do I gain by giving away my private data versus what is the risk of giving it away? However, most people don’t think about that, and it is hard to know what companies actually learn about you and what they use it for. So it will not be something the average Joe thinks about. Still, some parts of Europe might be better at facilitating the use of MPC than the US, due to a tradition of stronger regulation of various fields (consider the Danish sugar beet auction).

What can we learn from other communities that desire security, e.g., databases?
Other communities directly see the “money” and the products when they add security features. Javier pointed out that this was actually also the case for the crypto community in the 80s, when banks started to communicate electronically with each other. The conclusion seems to be that we should think in terms of concrete products and features when designing MPC.

What objections and roadblocks have you (the panel) met trying to promote MPC?
Javier’s experience is that people don’t seem to grasp the power of MPC, and they are only interested if they have an immediate problem that MPC can solve. Privacy and security are generally not something they want to pay for. Vladimir found that the extra cost imposed by MPC is the main issue when discussing MPC with engineers; for them, every cent matters. Florian claimed that you need a clear and concrete problem for people to actually want to use MPC. Even if a company could save money by using MPC, that might not be enough if they already make money on an insecure system. You need to wait for companies to have a security problem and then quickly sell them MPC.

Second Session: Data-Oblivious Computation

How to Implement (ORAM in) MPC

Marcel Keller

Marcel’s talk was the first to look at protocols based on secret sharing for arithmetic circuits, in a workshop otherwise dominated by Boolean circuits and garbled circuits. Marcel started by giving a brief overview of the celebrated SPDZ protocol. SPDZ is an offline/online protocol: in the offline phase, many multiplication triples are generated efficiently using (low-depth) leveled FHE. As the preprocessing is independent of the actual function and inputs (thanks to the Beaver rerandomization trick), it is very efficient and parallelizable. Turning this into an efficient protocol in practice is another story, and Marcel presented a number of issues that were addressed in his CCS 2013 paper, such as how to compile a program into an arithmetic circuit with the lowest depth and the highest number of parallel operations.
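The Beaver trick itself is simple enough to sketch in a few lines of Python (two parties over a toy 31-bit prime field; SPDZ additionally authenticates every share with information-theoretic MACs):

```python
import random

P = 2**31 - 1  # toy prime field; SPDZ uses a much larger field plus MACs

def share(x, n=2):
    """Additive secret sharing of x mod P among n parties."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reveal(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, triple):
    """Given shares of x, y and a preprocessed triple (a, b, c = a*b),
    open d = x - a and e = y - b, then locally compute shares of
    x*y = c + d*b + e*a + d*e."""
    a_sh, b_sh, c_sh = triple
    d = reveal([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reveal([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    z_sh = [(d * bi + e * ai + ci) % P for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P  # d*e is public, add it to one share
    return z_sh

# Offline phase: one random multiplication triple.
a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))
# Online phase: multiply two shared secrets.
z = reveal(beaver_multiply(share(12345), share(67890), triple))
```

The point is that each online multiplication costs only two cheap openings, with all the expensive, input-independent work pushed to the offline phase.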

Then Marcel went on to describe some recent results on implementing ORAM in MPC. Oblivious RAM is a very fascinating cryptographic primitive. Here is the scenario: a processor (with a small cache) wants to use an external, untrusted memory. Using “bread and butter” crypto it is possible to encrypt and authenticate the data stored in the RAM, so that a corrupted memory cannot read or change the data read or written by the processor. But the access pattern of the processor to the RAM might still leak information about the flow of the program and its secret state! A trivial solution is to simply read the whole memory on every access, and the goal of ORAM protocols is to achieve the same effect with lower communication complexity. At least in theory, combining ORAM and MPC allows one to securely evaluate RAM programs, which in many cases might be preferable to the standard “circuit” model of computation. Marcel showed how, by choosing the right ORAM (tree-based ORAM or path-ORAM by Shi et al.) and MPC protocol (SPDZ), this can actually be done efficiently, with applications to useful algorithms such as Dijkstra’s algorithm.
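The trivial solution can be sketched in a couple of lines, which also makes its cost obvious (the selection below stands in for a constant-time multiplexer; plain Python is of course not actually side-channel free):

```python
def oblivious_read(memory, secret_index):
    """Trivial ORAM baseline: touch every cell, so the physical access
    pattern is independent of secret_index. Cost is O(n) per access;
    schemes like Path ORAM bring this down to polylog(n)."""
    result = 0
    for i, value in enumerate(memory):
        selector = int(i == secret_index)  # stand-in for a constant-time select
        result += selector * value
    return result
```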

Secure Computation with Random Access Machines

Mariana Raykova

Mariana started, again, by pointing out that the RAM model of computation is more natural than the circuit-based one; it is in fact the model we use when we write code in our favourite programming language.

Moreover, if we want to perform the same computation over and over again on the same input, ORAM lets us achieve sublinear secure computation (which is otherwise impossible using circuit evaluation). Think of a search in a sorted list: this can be done with only log n accesses to the RAM, while with a circuit the whole input must be read every time.
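To see the gap, here is plain binary search instrumented to count memory accesses; a circuit for the same search would have to touch all n elements on every run:

```python
def binary_search(sorted_list, target):
    """Binary search: about log2(n) memory accesses, versus a circuit
    that must read the entire input on every evaluation."""
    lo, hi = 0, len(sorted_list) - 1
    accesses = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        accesses += 1  # one memory access per probe
        if sorted_list[mid] == target:
            return mid, accesses
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, accesses

index, accesses = binary_search(list(range(1024)), 700)
```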

Mariana went on to explain a result that I found extremely fascinating: it is possible to perform an oblivious search in a sorted list using only 1 ORAM access! This is a really cool result that might seem impossible (binary search requires log n accesses even in the clear!). But an ORAM access already includes a polylog factor, so by a clever organization of the memory, and because the ORAM already has a tree structure, it is actually possible to do binary search with only one access.

The take-home message: try to integrate computation into the ORAM in RAM-based MPC, and don’t discard low-degree FHE; it can be useful and practical.

Obliv-C: A Lightweight Compiler for Data-Oblivious Computation

Samee Zahur

It should be easy for researchers to implement MPC; they should not have to write their own compiler. Samee promised he is not going to just add a new standard to the mess, but will actually solve some problems.

From xkcd.com

If you already have a C program, it should be easy to wrap it in his language. Programmers get a type system where you can declare which variables are to be hidden. Obliv-C should help with benchmarking different protocols for the same problem and with easy prototyping. Performance is competitive, but the project still has some rough edges and missing features (floating point, logical operators, error reporting).

The project lives here: Obliv-C.

Third Session: Asynchronous & Broadcast-Efficient MPC

Broadcast (and Round) Efficient Secure Multiparty Computation

Juan Garay

This work deals with information-theoretic MPC and how to achieve it efficiently. The talk introduced the share-compute-reveal paradigm of BGW and CCD and explained that multiplication of shares is the major bottleneck of these protocols, since it requires communication, in particular broadcasts. Next, the well-known trick of using Beaver multiplication triples was explained, which reduces the cost of a multiplication gate to one reconstruction of a verifiable secret sharing (VSS) scheme.

The goal of the work is to reduce the number of broadcast rounds for MPC for n > 2t, while minimizing the overall number of rounds. The result they achieve is a VSS scheme with two broadcast rounds and a constant number of overall rounds, which is the first VSS with these features. Their construction uses as building blocks a weak secret sharing scheme and information checking (an information-theoretic signature-like functionality). This gives them a 3-broadcast-round VSS, and they reach the 2-broadcast-round scheme by using modercast to remove one of the broadcasts.
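For readers who want to recall what the reconstruction step is built on, here is plain Shamir secret sharing in Python (toy field size, honest dealer; VSS adds the machinery for parties to verify that the dealer dealt consistent shares):

```python
import random

P = 2**31 - 1  # toy prime field

def shamir_share(secret, t, n):
    """Degree-t polynomial f with f(0) = secret; party i receives f(i).
    Any t+1 shares reconstruct; any t shares reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = shamir_share(42, t=2, n=5)
recovered = reconstruct(shares[:3])  # any t+1 = 3 shares suffice
```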

Due to time constraints the last part of the talk was covered quickly, but as a second result constant-round pseudosignatures were achieved; together with the above VSS, these can be combined into a general efficient MPC protocol.

Asynchronous MPC with t < n/2 Using Non-equivocation

Aniket Kate

The presentation considered how to achieve actively secure asynchronous MPC where at most half of the parties are corrupted, assuming non-equivocation. Equivocation is what a corrupt party does when, supposed to broadcast a message, it sends different messages to different parties, or nothing at all to some of them; non-equivocation is a mechanism that prevents this. The specific contribution of the talk is an actively secure asynchronous MPC protocol tolerating t < n/2 corrupted parties with O(n^3 k) communication complexity, assuming access to threshold signatures and non-equivocation. This improves on the result of [BHN10], which has communication complexity O(n^4 k) and needs the additional assumption of threshold homomorphic encryption.
The overall idea is first to make non-equivocation transferable using signatures. Specifically, a signature is added to the messages that need to be broadcast, so a party receiving a broadcast message can forward it to the parties who did not get it, and they can verify the message using the signature. From this, an MPC protocol is constructed based on “supervised” Beaver triples. These triples reduce to “supervised” sharing, which is almost the same as verifiable secret sharing, and these can then be constructed relatively easily.
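A toy sketch of the transferability idea in Python (the ToyPKI below is a stand-in that signs and verifies on everyone's behalf; a real protocol would use public-key signatures so each receiver can verify locally):

```python
import hashlib, hmac

class ToyPKI:
    """Stand-in for digital signatures: this object holds each party's key
    and verifies on anyone's behalf. Real protocols use public-key
    signatures, which any receiver can check without a trusted helper."""
    def __init__(self):
        self.keys = {}

    def register(self, party: str):
        self.keys[party] = hashlib.sha256(party.encode()).digest()

    def sign(self, party: str, msg: bytes) -> bytes:
        return hmac.new(self.keys[party], msg, hashlib.sha256).digest()

    def verify(self, party: str, msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(party, msg), sig)

# Transferable non-equivocation: P1 signs its broadcast message; P2, who
# received it, forwards (msg, sig) to P3, who can check that P1 really
# sent exactly this message -- so P1 cannot equivocate.
pki = ToyPKI()
pki.register("P1")
msg = b"share for gate 7"
sig = pki.sign("P1", msg)
forwarded_ok = pki.verify("P1", msg, sig)
```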

Quorums Quicken Queries: Efficient Asynchronous Secure Multiparty Computation

Mahnush Movahedi

In the last talk of the workshop, Mahnush presented an asynchronous algorithm for MPC in the honest-majority setting. On day one a different protocol based on quorums (also coauthored by Mahnush) was presented. The main difference seems to be that while in the first talk the authors were willing to settle for computational security, in this talk we are looking at an unconditional protocol. The setting is n players, and security should hold against any static, malicious adversary that corrupts up to 1/8 of the players. Using this protocol, it is possible to evaluate any function f over a field that can be computed by a circuit with m gates, with communication and computational complexity O(m/n + sqrt(n)).

One of the challenges in constructing protocols for asynchronous MPC is that one cannot wait for all inputs (or the adversary could stall the computation), so one needs to proceed once a threshold is reached. Mahnush presented a new data structure to efficiently count the number of inputs, called a t-counter, which is roughly a binary tree where messages are received at the leaves and counts are propagated up the tree. Every node sends only a logarithmic number of messages.
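As I understood it, the counting idea can be sketched roughly like this (a naive rendition, not the actual t-counter, which is more careful about how many messages each node sends and about Byzantine behaviour):

```python
class Node:
    """A node in the counting tree: tracks how many arrivals it has seen."""
    def __init__(self, parent=None):
        self.count = 0
        self.parent = parent

def build_tree(depth):
    """Complete binary tree of the given depth; returns (root, leaves)."""
    root = Node()
    level = [root]
    for _ in range(depth):
        level = [Node(parent=n) for n in level for _ in (0, 1)]
    return root, level

def receive(leaf):
    """An input arrives at a leaf; the increment travels up the tree,
    one message per level, i.e. O(log n) messages per arrival."""
    node = leaf
    while node is not None:
        node.count += 1
        node = node.parent

root, leaves = build_tree(3)   # 8 leaves
for leaf in leaves[:5]:
    receive(leaf)
# root.count now holds the total number of inputs received so far
```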

The MPC protocol proceeds by generating quorums of parties, and assigning each gate of the circuit to a specific quorum.

WAMPC, Day 2, Morning

A delegation of the Aarhus Crypto Group (Bernardo David, Tore Frederiksen, Claudio Orlandi, Tomas Toft and Roberto Trifiletti) is attending the Workshop on Applied Multi-Party Computation and will attempt to cover this event on the MPCLounge.

We decided that timeliness is more important than perfection, and we therefore apologize for any mistake in these posts!

This is the third of a 4 post series.

First Session: Invited Talk

Secure Computation in 2029: Boom, Bust, or Bonanza

David Evans


David decided to use his invited talk not to present his recent results, but instead to try to make predictions about the future of MPC adoption. He started by justifying the use of the year 2029 in his title: apparently, heuristic research in astrophysics shows that scientific development doubles every 15 years. David played a similar game with MPC using Google Scholar, showing that around 700 papers have ever been published on MPC, of which only 160 were published up to 1999. One can go further back and find that 7 papers were published up to 1984 (while the count is 0 up to 1969, except for a few hilarious misreported entries, according to which Payman Mohassel was already working in the field in 1908!).

Anyway, the take-home message is that, according to David, MPC will evolve dramatically in the next 15 years.

Jumping forward to the 2029 predictions: the US has invested around $100M in MPC research, which is almost the same as what is spent on the arts every year, or what it cost to clean Virginia up from snow last week. Of course there is some expectation of a return, but what counts as an acceptable result? For researchers, getting papers published and making sure students are hired after graduating is important, but the public expects industrial impact. During the talk David made a number of claims, listed here as they appeared in the presentation.

Claim #1
The MPC industry will be bigger than the anti-malware industry in 2029 (but it is unclear whether this will be due to the MPC industry exploding or the anti-malware industry shrinking).

Claim #2
High cost is no longer the main impediment to widespread MPC use.

Next, David moved on to the genetic dating problem: the traditional dating problem, but instead of keeping your preferences private you want to keep your genetic information private. The goal is still to see if there is a match, so the two are in no way exclusive; in fact they complement each other perfectly. There is a (non-MPC) mobile app for the genetic dating problem in Iceland, which is clearly needed in such a small community:

"If I would have had this app last year I probably wouldn't have gone home with my cousin"


Continuing on this topic, David noted that the cost of sequencing a human genome dropped drastically between 2008 and 2013, from $10M to $0.01M (until then the decrease had been steadier, following more or less Moore’s law). Rephrasing this in MPC terms: in 2004 a computation could be done with FairPlay for a price of $100M, while in 2013 it can be done with JustGarble for only $0.001M. Moving on to costs that still matter: for 3 or more parties, costs are still way too high. On a different note, MPC actually requires around 10,000x as much energy as computation in the clear. Other important factors that need to be addressed are making MPC meaningful to the end user and developing methods to reason about how much parties learn from the output of the computation.

In the final part of the talk David talked about two of his own projects: the MPC mobile apps part of the “Might be Evil” project, as well as a quite different project.

Second Session: Databases

Practical Private Database Querying

Vlad Kolesnikov and Tal Malkin

Vlad and Tal talked about Blind Seer — an implementation for performing database queries on large datasets. The core goal is extreme efficiency: at most an order of magnitude slower than running MySQL on known data. To get there, they allow limited leakage, which makes sublinear solutions possible. The amount of leakage is hard to quantify, but parties learn no information about the plaintexts beyond the access patterns, which should be acceptable for many scenarios.

The construction is based on Bloom filters. A server holds a search tree in encrypted form. Starting at the root, the client and server use a Yao circuit to determine which subtrees need to be queried. This is revealed, and the parties continue recursively until they either reach the leaves or find that no subtree contains a match. Naturally, this leaks information: e.g., an AND-query may be pruned, while this is never the case for an OR-query. Efficiency is very good; theoretically it can be proportional to the number of matches for the best terms. In practice, most queries are at most a factor of two slower than MySQL.
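For reference, a Bloom filter answers "is x possibly in the set?" with no false negatives; a minimal Python version (arbitrary toy parameters, far simpler than Blind Seer's encrypted per-node variant) looks like this:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions set k bits per item.
    Lookups can give false positives but never false negatives."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def maybe_contains(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
```

Storing one such filter per search-tree node lets a query test "can anything below this node match?" and prune whole subtrees without touching them.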

Some queries require additional hiding, such as whether any elements were found (0-1 set-size indistinguishability), e.g., whether an airline passenger list contains a terrorist. This cannot be concealed in the cryptographic sense; however, when the probability of “bad events” is low, dummy paths can be followed to ensure that the server only learns that the probability of a bad event has increased.

Practical linking of databases using secure multiparty computation

Riivo Talviste

Riivo talked about the scenario where a state is interested in data-driven decision making. The goal is to combine different, sensitive data sources while avoiding the risks of doing so. The concrete example is an analysis of income in the public sector. This has been implemented and can be demoed here.

A second, practical application considers the negative impact of being employed during one’s studies. By combining income data (from the tax office) with data on education, one can compute, e.g., the probability of a student dropping out if she is employed from her second year.

Traditionally, this requires approval from data protection agencies; however, even when permission is given, it may be restricted to anonymized data, which can lead to data loss (an actual example of 76-98% loss was given for this scenario). When using MPC this is not an issue, since the agency allows using non-anonymized data in that case.

A large set of statistical functions has been implemented. The background is interviews with various domain experts: when asked “what functions are interesting?”, the typical reply was “Statistics!”

Third Session: Server-Aided MPC

PartialGC: A System for Saving and Reusing Intermediate Garbled Circuit Values

Benjamin Mood

This work came up with ways of reusing intermediate values in garbled circuits, by transforming output wires of a first circuit into input wires of later circuits. A scheme was proposed for doing this transformation, along with a way of checking that it was done correctly, so no cheating occurs.
Next, the actual evaluation is outsourced to a high-performance cloud server using the same techniques as in “Secure Outsourced Garbled Circuit Evaluation for Mobile Devices” by Henry Carter, Benjamin Mood, Patrick Traynor, and Kevin Butler from USENIX Security ’13.
Finally, comparisons with previous schemes were presented: the scheme achieves speedups of 8.7x, 15x and 23x for 64 circuits, and 11x, 19x and 32x for 128 circuits, at key sizes of 64, 128 and 256 bits respectively.

Whitewash: Outsourcing Garbled Circuit Generation for Mobile Devices

Henry Carter

Mobile devices are present in every moment of our lives: accessing emails, instant messaging, social networks, etc. Hence, it is interesting to consider running secure computation applications in such devices. However, computational resources in mobile devices are seriously constrained, making it impractical to have them run current protocols for secure computation, such as garbled circuit based protocols.

The goal of Whitewash is to enable two smartphone users to securely compute functions using garbled circuits with the help of a remote "cloud". To achieve this goal, smartphones outsource circuit generation to a cloud provider in such a way that no information about the inputs or outputs is leaked. The scheme also outsources the oblivious transfer steps that would otherwise overload a smartphone's resources.

Basically, the mobile device exchanges random seeds with the cloud, which then uses them to generate garbled circuits. The mobile device, the cloud and the second party involved in the computation then commit to their inputs and to information used later on to verify the correctness of the final computation results. Throughout the computation, the data is protected by "blinding keys", which are used in the end to retrieve the output.
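The seed trick is the heart of the saving: if both parties can derive every wire label from a shared seed, the cloud never needs to ship the garbled circuit back to the phone. Here is a minimal sketch of that idea, with HMAC-SHA256 standing in as the PRF (the function names and derivation scheme are my own illustration, not Whitewash's actual construction):

```python
import hmac, hashlib

def wire_labels(seed: bytes, wire_id: int):
    """Derive the two labels of a wire from a shared seed (toy PRF = HMAC-SHA256).
    Both the mobile device and the cloud can recompute these independently."""
    def prf(tag: bytes) -> bytes:
        return hmac.new(seed, tag, hashlib.sha256).digest()[:16]
    return prf(b"wire-%d-0" % wire_id), prf(b"wire-%d-1" % wire_id)

seed = b"shared-128-bit-seed!"       # exchanged between device and cloud
device_side = wire_labels(seed, 7)   # device recomputes labels locally...
cloud_side = wire_labels(seed, 7)    # ...while the cloud garbles with the same ones
assert device_side == cloud_side     # no garbled circuit needs to travel back
```

The point of the sketch is only the symmetry: any party holding the seed can regenerate any label on demand, so the constrained device stores 16 bytes instead of a multi-megabyte circuit.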

As for efficiency improvements, the highlights are: no OT nor consistency checks performed by the mobile device, and strong resistance against collusion between the mobile device and the cloud, as well as between the other party and the cloud. Such improvements are verified through comparisons with previous protocols evaluating circuits for Hamming distance, matrix multiplication and RSA on standard smartphones. The results show exciting improvements in runtime. Moreover, the results show that OT accounts for most of the computational effort involved.

WAMPC, Day 1, Afternoon

A delegation of the Aarhus Crypto Group (Bernardo David, Tore Frederiksen, Claudio Orlandi, Tomas Toft and Roberto Trifiletti) is attending the Workshop on Applied Multi-Party Computation and will attempt to cover this event on the MPCLounge.

We decided that timeliness is more important than perfection, and we therefore apologize for any mistake in these posts!

This is the second of a 4 post series.

First Session: Panel on Theory vs Practice

The first panel of the workshop, titled "Theory vs. Practice of MPC", featured Yehuda Lindell and David Evans, with Payman Mohassel as session chair. Three main topics were brought up, listed below.

– How do we increase collaboration between different related communities outside crypto?
Yehuda started out by saying that the theory and practice communities within crypto work pretty well together. However, we cannot expect MPC research from other communities, though compiler optimizations and hardware enhancements (AES-NI) are welcome.
There was discussion on the likelihood of getting more cryptographic primitives embedded in standard hardware, but the overall consensus was that this will probably not happen anytime soon. However, specialized hardware is being made for e.g. FHE.
Next the discussion turned to trusted hardware, and it was noted that trusted hardware in 2014 probably shouldn't be trusted blindly, given recent events.

David made the remark that to get people to use and push MPC, they must first understand it, and hence optimize systems for it. What we need is a killer application and driver for MPC: Not something that people are already doing today, to which we can add privacy using MPC. But something that no one is willing to do today, but that people would be willing to do using MPC.

– How do we evaluate efficiency of MPC and security of MPC?
In general the panel members agreed that the way efficiency of cryptographic protocols is measured in our field is extremely messy. Everyone uses their own implementations of basic primitives and runs their experiments in very different settings. All this makes results hard to reproduce, since no one can rerun the experiments, and it is very hard to compare results across papers.
Finally the discussion turned to security models. In crypto we're often black/white: either secure or insecure. However, it was argued that in the real world, for a lot of problems, it's often considered ok to leak some information.

– Implementing MPC protocols is time-consuming. Can we do anything to remedy this?
A lot of people implement their protocols from scratch, including all the primitives they utilize. This is often time consuming and ends up in hacky code only readable to the author. Yehuda advocates the library SCAPI, which gives the author access to a lot of primitives that can be used as building blocks for the actual protocol. David agreed that SCAPI is a valuable tool: "If you are implementing a Yao protocol, you don't want to spend time implementing OT".
Finally the importance of implementing your protocol was underlined. A lot of performance issues only become apparent when you implement stuff: there are tons of factors that need to be considered, such as bandwidth, network latency, round complexity, crypto operations and plaintext operations. We’ve reached a time where crypto in many cases isn’t the bottleneck of performance anymore.


Second Session: Garbled Circuits

fleXOR: flexible garbling for XOR gates that beats free-XOR

Mike Rosulek

FleXOR is an optimization of the free-XOR approach for garbled circuits. The free-XOR approach [KS08] makes it possible to evaluate XOR gates in a garbled circuit for free (without any communication or cryptographic computation). Free XOR is compatible with the (small) garbled row reduction, which reduces the number of ciphertexts in a non-XOR garbled gate from 4 to 3, but not with the (large) garbled row reduction, which reduces the number of ciphertexts to 2. Furthermore, the free-XOR approach requires a quite strong security assumption (with a circular-security flavor). FleXOR, on the other hand, makes it possible to use a weaker assumption (only "correlation robustness") and to use the large row reduction, which requires only 2 ciphertexts per gate. The price is that not all XOR gates can be done for free, but in general the average number of ciphertexts for a garbled circuit ends up being smaller with fleXOR than with free XOR.
The fleXOR construction is based on not having a single global difference in the garbled circuit (as in free XOR), but several differences (still fewer than the number of gates). As a result, only some garbled XOR gates need ciphertexts; exactly which and how many depends on the circuit structure.
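To see the invariant fleXOR relaxes, here is a toy illustration of the [KS08] free-XOR trick itself: every wire's two labels differ by one global offset Δ, so an XOR gate needs no ciphertexts at all (this is my own minimal sketch, not code from either paper):

```python
import secrets

DELTA = secrets.token_bytes(16)  # global offset: label1 = label0 XOR DELTA on every wire

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# random 0-labels for the two input wires; 1-labels are offset by DELTA
a0, b0 = secrets.token_bytes(16), secrets.token_bytes(16)
a1, b1 = xor(a0, DELTA), xor(b0, DELTA)

# free XOR: the output wire's 0-label is simply a0 XOR b0 --
# no ciphertexts, no crypto, to either garble or evaluate the gate
c0 = xor(a0, b0)
c1 = xor(c0, DELTA)

# evaluating with any pair of input labels yields the correct output label:
assert xor(a0, b0) == c0   # 0 XOR 0 = 0
assert xor(a0, b1) == c1   # 0 XOR 1 = 1
assert xor(a1, b0) == c1   # 1 XOR 0 = 1
assert xor(a1, b1) == c0   # 1 XOR 1 = 0 (the two DELTAs cancel)
```

The single shared Δ is exactly what forces the strong circular-security assumption; fleXOR's several independent offsets break that dependency at the cost of ciphertexts on some XOR gates.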

Zero-knowledge from garbled circuits and garbled circuits for zero-knowledge

Claudio Orlandi

Claudio shows how to do actively secure zero-knowledge proofs of non-algebraic statements (which has previously been quite a complex task) using a passively secure version of Yao's garbled circuits. To see how this is done, first notice that zero knowledge is a special case of MPC: it is a two-party protocol between a prover, holding an element x and a witness w that can be used to efficiently verify that x is in a specific language, and a verifier without any input. Thus the problem can be expressed as a function f_x(w) that outputs true if w is a witness for x and false otherwise. The protocol consists of having the verifier construct a garbled circuit computing f_x, which he sends to the prover. The prover is given his input keys using an actively secure OT protocol, evaluates the garbled circuit, and in the end commits to the output (a key representing either true or false), which he sends to the verifier. The verifier then sends the randomness used to construct the garbled circuit to the prover, who checks that the verifier has been honest; if so, he opens the commitment, and the verifier learns whether or not the prover actually had a witness for x, but nothing more.
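A toy sketch of that message flow may help. Here I collapse the garbled circuit to just its two possible output keys and use a hash-based commitment; everything below is my own simplification for illustration, not the protocol's actual garbling or OT:

```python
import hashlib, secrets

def commit(msg: bytes):
    """Toy hash commitment: returns (commitment, opening randomness)."""
    r = secrets.token_bytes(16)
    return hashlib.sha256(r + msg).digest(), r

# Verifier: "garbles" f_x, keeping the two output keys and all randomness
key_false, key_true = secrets.token_bytes(16), secrets.token_bytes(16)

# Prover: obtains input keys for w via actively secure OT, evaluates the
# circuit, and thereby learns exactly ONE of the two output keys
w_is_witness = True
learned_key = key_true if w_is_witness else key_false

# Prover commits to the learned output key BEFORE seeing any secrets
c, r = commit(learned_key)

# Verifier reveals its garbling randomness; the prover checks the circuit
# was built honestly and only then opens the commitment
assert hashlib.sha256(r + learned_key).digest() == c   # opening verifies
assert learned_key == key_true                          # verifier accepts
```

The commit-before-reveal ordering is what keeps the passively secure garbling sound: the prover is locked to its output key before the verifier's randomness could help it cheat.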

Finally Claudio talked about some work in progress: in zero-knowledge the prover knows ALL the input bits, and therefore all the values on the internal wires of the circuit. Standard Yao gates allow oblivious evaluation because the evaluator does not know the values on the internal wires. Therefore, for the zero-knowledge case we can use a simpler garbling scheme that only guarantees soundness (no privacy): in particular, in these garbling schemes only 1 ciphertext per gate is needed (2 if one wants free XOR). Claudio finally noted that the fleXOR technique from the previous talk seems to have a straightforward application to garbled circuits without privacy as well.

Efficient garbling from a fixed-key blockcipher

Viet Tung Hoang

Viet points out an attack on the free-XOR approach when using the garbling scheme from [PSSW09]: it turns out that the keys on the left and right wires entering a gate can end up being the same, in which case it is possible to learn the global difference of the garbled circuit by XOR'ing two entries of the garbled computation table. This occurs because the circuit structure can yield an unfortunate symmetry (or because the same key is directly used in both inputs to a garbled gate). With this in mind, Viet introduces a way to garble a gate using fixed-key AES which does not have this problem but is still highly efficient. The idea is to view the keys as elements of GF(2^k) and multiply each of them by a constant to break the symmetry. Questions were asked about how secure AES is in practice when used with a fixed key, and this seems to be one of the questions where the MPC community needs to interact with other crypto sub-communities (in this case, experts in symmetric crypto) to figure out what to do.
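The symmetry and its fix can be shown in a few lines. When the same key feeds both gate inputs, a naive XOR of the keys collapses to zero; multiplying each position by a distinct GF(2^k) constant prevents the cancellation. The exact constants and tweaks in the paper differ; this sketch (with the usual AES-GCM-style reduction polynomial, my choice for illustration) only shows why constant multiplication breaks the symmetry:

```python
def gf_double(x: int, k: int = 128, poly: int = 0x87) -> int:
    """Multiply by 2 in GF(2^k), reducing modulo the field polynomial on overflow."""
    x <<= 1
    if x >> k:                           # top bit fell off: fold it back in
        x = (x ^ poly) & ((1 << k) - 1)
    return x

A = B = 0xDEADBEEF        # the problematic case: same key on both input wires

# Naive symmetric combination: equal keys cancel, leaking structure
assert A ^ B == 0

# Per-position constant multiplication (here 2*A XOR 4*B): no cancellation,
# since 2*X XOR 4*X = 6*X != 0 in GF(2^k) for any nonzero X
mixed = gf_double(A) ^ gf_double(gf_double(B))
assert mixed != 0
```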

Session Three: Applications

Secure Collaborative Statistics in Credit Rating Using Linear Programming

Tomas Toft

A farmer runs into a bank… it's not the beginning of a joke, but the beginning of the talk by Tomas Toft (Aarhus University). Tomas started his talk with the following motivating example: Danish dairy farmers need loans to support their companies, and the banks want to assess how well a farmer is doing, to estimate the probability that he will go bankrupt and not pay the loan back. Big banks have many clients, so they have a big sample to compare with. Small banks do not, and they would like access to more data to jointly credit-rate farmers. This is a classic MPC scenario, where each bank inputs its database of clients, and the new farmer can input his data and learn his ranking and nothing else. The algorithmic tool to do this is Data Envelopment Analysis (DEA), and Tomas argues that DEA translates easily to linear programming.

The underlying protocol is SPDZ for additions and multiplications, together with comparison by Lipmaa-Toft (FC’13) that can compare n-bit values with O(log n) multiplications. A Java implementation running on Amazon cloud shows that this computation for typical inputs can be solved in slightly more than 3 minutes (or 5 seconds allowing some extra leakage) after the preprocessing.

Standard Network Flow problems with Secure Multiparty Computation

Abdelrahaman Aly

Abdel (from the Center for Operations Research and Econometrics, UC Louvain) started his talk by confessing to us that he is an engineer and loves applications, such as benchmarking, problems related to flow on networks, supply chain management, operational research, etc.

During his talk Abdel showed how to take algorithms for the aforementioned problems and implement them securely using MPC. To abstract from the underlying protocols, Abdel uses the arithmetic black-box abstraction. Part of the effort is translating standard algorithms into "MPC-safe" algorithms: when running a secure computation, loops with a variable number of iterations and branching (if… then…) are a big no-no. This transformation makes the original algorithms (which already have quite high complexity) slower by a (multiplicative) factor of n^2. Experiments show that the cost of these algorithms in MPC is quite high (however, the implementations were done using the by now obsolete VIFF framework).
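The standard way to make branching "MPC-safe" is to evaluate both sides and select obliviously with arithmetic, so the executed operations never depend on secret data. A minimal sketch over a prime field (plain ints here standing in for secret-shared values; in a real arithmetic black-box the multiplication would be a secure protocol step):

```python
def oblivious_select(b: int, x: int, y: int, p: int = 2**61 - 1) -> int:
    """Branch-free replacement for 'x if b else y' over a prime field.
    b must be a 0/1 value; all three would be secret-shared in actual MPC."""
    return (b * x + (1 - b) * y) % p

# both branches are always "computed"; only the selection bit decides the result
assert oblivious_select(1, 10, 20) == 10
assert oblivious_select(0, 10, 20) == 20
```

Doing this for every branch (and padding every loop to its worst-case iteration count) is precisely what introduces the n^2-style overheads mentioned above.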

MPC in Large Networks (with applications to anonymous broadcast)

Mahdi Zamani

Mahdi motivates his talk using examples from the real world, in particular the growth of modern networks such as BitTorrent, Facebook, Twitter or Bitcoin. How to run MPC with so many parties?

Their setting is N parties (for very big N), with at most N/10 (static, active) corruptions. While it is known that in the setting of honest majority it is possible to construct information-theoretic protocols, Mahdi argues that one can gain in efficiency by incorporating computational aspects in their protocols. In particular, using homomorphic encryption in some steps of the protocol it is possible to reduce the number of rounds and the communication complexity.

Their protocol builds on many tools, such as quorum-based MPC, VSS, threshold FHE, SPDZ, etc. The main idea behind the protocol seems to be to assign each gate to a small set of parties, who evaluate it and then send the result to the next set of parties. This reduces the communication, and due to the low number of corrupted parties and the quorum construction, one can make sure that each small subset of parties contains many honest parties.

An example application for their protocol is anonymous broadcast (or the dining cryptographers problem).


Koki Hamada

This was the last talk of the first day, and by now my jetlag really kicked in, so these notes will not do justice to Koki Hamada's (NTT) very interesting work.

MEVAL is a very efficient framework for 3-party secure computation, tolerating at most 1 passive corruption (same as the sugar-beet auction/Sharemind), but working modulo the Mersenne prime 2^61-1. MEVAL claims high performance (8.7 million multiplications per second, 6.9 seconds for sorting 1 million values) and can be used in the client-server model, where a number of clients share their data among 3 servers, who run the computation and return an output. MEVAL implements basic MPC functionalities and statistical functions. They reported on joint experiments with a medical study group, a university hospital and a statistics bureau.

Under the hood of MEVAL there are many interesting optimizations, including asynchronous processing, pseudorandom secret sharing, and a careful choice of the finite field to perform computation over. Finally, MEVAL contains a number of optimized protocols for higher-level primitives, such as bit decomposition for small values, a sorting protocol, etc.
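The choice of the Mersenne prime 2^61-1 is a good example of such field-level care: since 2^61 ≡ 1 (mod p), modular reduction needs no division, just folding the high bits onto the low bits. A sketch of that folding (my own illustration of the standard trick, not MEVAL's code):

```python
P = 2**61 - 1   # the Mersenne prime MEVAL computes over

def mersenne_reduce(x: int) -> int:
    """Reduce x mod 2^61 - 1 without division: because 2^61 = 1 (mod P),
    the bits above position 61 can simply be added back onto the low bits."""
    while x >> 61:                       # fold while any high bits remain
        x = (x & P) + (x >> 61)
    return 0 if x == P else x

import random
a, b = random.randrange(P), random.randrange(P)
assert mersenne_reduce(a * b) == (a * b) % P   # matches ordinary reduction
```

On a 64-bit machine this turns every field reduction into a shift, a mask and an addition, which is a large part of why such fields are attractive for MPC arithmetic.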

WAMPC, Day 1, Morning

A delegation of the Aarhus Crypto Group (Bernardo David, Tore Frederiksen, Claudio Orlandi, Tomas Toft and Roberto Trifiletti) is attending the Workshop on Applied Multi-Party Computation and will attempt to cover this event on the MPCLounge.

We decided that timeliness is more important than perfection, and we therefore apologize for any mistake in these posts!

This is the first of a 4 post series.
