Publicly Auditable Secure Multiparty Computation

Presented at SCN 2014, Amalfi, Italy by Carsten Baum, Aarhus University

Blog written by Rasmus Lauritsen

In multi-party computation, a set of $n$ players wants to compute a function $y = f(x_1, \dots, x_n)$ where each input $x_i$ is private information to player $i$. Security in this setting means that each player essentially learns *nothing* new other than the result $y$.

In the above definition, security holds as long as at least one player remains honest (or acts non-corrupt). In this presentation, the notion of security is extended to the setting where all parties are corrupted: the computation leaves an auditable transcript that allows third-party observers to audit it afterwards. Naturally, the transcript is public information, and the above security definition must still hold in the presence of a single honest party.

One setting in particular stands out: a small number of MPC servers carry out a computation on behalf of a set of clients (the stakeholders) providing inputs, a set that is too large to run the MPC itself in practice. In this client/worker setting, the auditable MPC approach suggested in this talk allows the clients to verify from the transcript that none of the MPC servers deviated from the protocol and that the specified computation was actually performed.

The famous sugar beet auction taking place in Denmark each year is an example of such a scenario: the farmers are clients, and indeed stakeholders, inputting data to a small set of MPC servers, one of which is maintained by their trade union.

An example of a concrete construction is presented: in the context of the SPDZ protocol, a bulletin board is used where the input providers publish Pedersen commitments to their input values. These commitments support the same homomorphic operations as the SPDZ protocol can perform on values. Therefore, with a public description of the function also on the bulletin board, any auditor can replay (and thereby audit) the function on the input commitments to obtain a commitment to the result. Because of the hiding property of the commitment scheme, this reveals no extra information. The binding property ensures that even if all MPC servers deviate and compute another function, they will be caught by an auditor.
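The homomorphic property that makes the replay possible can be sketched as follows. This is a toy illustration only (the group parameters and generators below are made up for demonstration, not the paper's actual instantiation): multiplying two Pedersen commitments yields a commitment to the sum of the committed values, so an auditor can apply linear operations to commitments without ever seeing the inputs.

```python
# Toy sketch of the Pedersen commitment homomorphism (illustrative
# parameters, not a secure instantiation).
import random

# q = 1019 and p = 2q + 1 = 2039 are both prime, so the squares mod p
# form a subgroup of prime order q.
p, q = 2039, 1019
g = pow(2, 2, p)   # generator of the order-q subgroup
h = pow(3, 2, p)   # second generator; its dlog w.r.t. g is assumed unknown

def commit(x, r):
    """Pedersen commitment C = g^x * h^r mod p."""
    return (pow(g, x % q, p) * pow(h, r % q, p)) % p

x1, r1 = 5, random.randrange(q)
x2, r2 = 7, random.randrange(q)

# Multiplying commitments commits to the sum of values (and randomness).
assert commit(x1, r1) * commit(x2, r2) % p == commit(x1 + x2, r1 + r2)
```

The hiding property comes from the random $r$, and binding from the unknown discrete-log relation between the two generators.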

A notable feature of this construction is that it introduces only a small overhead, linear in the number of inputs. Since SPDZ is already secure against $n-1$ corrupted parties, an actual audit (replaying the function on the commitments) only has to take place if all computation servers are suspected to be corrupt.

Faster Maliciously Secure Two-Party Computation Using the GPU

Blog written by Carsten Baum

In this blog post, I will write a little bit about Thomas Jakobsen’s talk “Faster Maliciously Secure Two-Party Computation Using the GPU”. For this blog post, I assume that the reader is familiar with Yao’s original Garbled Circuit (GC) protocol for secure two-party computation (2PC).
Thomas started with a description of techniques for making GC protocols actively secure. The approach used in their paper is cut-and-choose, meaning that party A creates not one but multiple garbled versions of the same circuit. About half of those garbled circuits are chosen by the other party B to be completely opened up (to check that they were constructed correctly). If they were constructed correctly, then with a certain probability a certain number of the remaining circuits must be correct as well. This introduces a number of other problems: (1) A might provide inconsistent inputs of his own across circuits, and (2) A might send incorrect input wires for B. The authors deal with the second problem using an approach by [LP07], where the input of B is transformed such that A learning a few bits of B's new input does not reveal information about B's actual input. To tackle problem (1), the authors use an approach from [sS13, FN13] which consists of computing a universal hash function on A's inputs, with B checking that these values are consistent across all circuits sent by A.
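To get a feeling for why cut-and-choose works, here is a back-of-the-envelope sketch (illustrative only, not the paper's exact analysis or parameters): if A garbles s circuits and corrupts k of them, and B opens a uniformly random half, the chance that no corrupted circuit is among the opened ones is a simple ratio of binomial coefficients.

```python
# Probability that a cheating garbler survives a "open half" cut-and-choose.
# Illustrative combinatorics only; real protocols use sharper analyses.
from math import comb

def survive_prob(s, k):
    """Chance that all k corrupted circuits avoid the opened half of s circuits."""
    if k > s // 2:
        return 0.0  # more bad circuits than unopened slots: always caught
    return comb(s - k, s // 2) / comb(s, s // 2)

# With many circuits the survival probability drops off very fast.
assert survive_prob(8, 4) == comb(4, 4) / comb(8, 4)   # only 1 in 70 selections
assert survive_prob(80, 40) < 2 ** -70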
One could now evaluate all evaluation circuits and take the majority of the outputs. This yields quite a substantial overhead, but one can do better using the forge-and-lose technique due to [B13, HKE13, L13]: if one of the circuits does not evaluate to the same output value as the others, then B can reconstruct the inputs of A (which makes him able to evaluate the computed circuit in the clear). As the main contribution of their work, the authors achieve this property without an additional MPC as in [L13] or trapdoor commitments as in [B13]. Instead, they use the free-XOR optimization, which reveals the input of A whenever two different output wire keys of the same gate become known to B. For every output gate, one can now construct one polynomial that goes through all the 0-wire keys and another through all the 1-wire keys. By choosing the degree appropriately, this allows B to compute all output values if at least one circuit is correct (which is guaranteed by the cut-and-choose) and one is wrong. In the protocol, one now has to make sure that all wire keys indeed lie on a polynomial, but this was outside the scope of Thomas' talk.
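The interpolation idea underlying this can be sketched in a few lines. This is a hedged illustration, not the paper's actual construction: assume a secret (standing in for the value that reveals A's input) is hidden as the constant term of a degree-d polynomial, and each evaluated circuit contributes one point; then any d+1 points suffice to recover the secret by Lagrange interpolation.

```python
# Minimal Lagrange interpolation sketch over a prime field (toy parameters).
P = 2**61 - 1  # a Mersenne prime used as the field modulus

def eval_poly(coeffs, x):
    """Evaluate a polynomial (coefficients low-degree first) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interpolate_at_zero(points):
    """Recover f(0) from (x, y) pairs via Lagrange interpolation over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789                  # stands in for the input-revealing value
poly = [secret, 42, 99]             # degree 2: any 3 points reconstruct f(0)
points = [(x, eval_poly(poly, x)) for x in (1, 2, 3)]
assert interpolate_at_zero(points) == secret
```

With fewer than d+1 points the secret stays information-theoretically hidden, which matches the intuition that an honest run (where B only ever learns consistent wire keys) reveals nothing.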
He finally presented benchmark numbers for their implementation (which makes heavy use of an available GPU). According to the benchmark results, their work is superior to all preceding approaches in terms of runtime, even though the numbers are potentially not directly comparable due to the use of the GPU and the fact that security only holds in the Random Oracle Model.

Adapt, adapt, adapt

Muthuramakrishnan gave the talk “On Adaptively Secure Protocols”. He introduced two main results: first, a compiler which can transform a semi-honest, statically secure MPC protocol into an adaptively UC-secure protocol, with only a constant increase in the number of rounds and minimal setup assumptions (such as a Common Reference String). The second result is the first concurrent non-malleable zero-knowledge (CNMZK) protocol secure in the fully adaptive setting. As an artifact of the second result, Muthu also provides a compiler which takes any semi-honest secure protocol as input and outputs a fully concurrently secure protocol under polynomial-time assumptions in the Angel-based UC framework.
The motivations for considering adaptive adversaries are many: first of all, adaptive security is stronger and yields leakage resilience; in a more practical setting, it has applications in cloud computing.
To get the first result, Muthu uses simulatable public-key encryption and UC puzzles. He furthermore uses non-malleable commitments in his construction.
To achieve the second result, Muthu overcomes an impossibility result by slightly tweaking the definitions used.
The details and tricks are plentiful and not that easily explained in a blog post, so I direct the interested reader to the paper.

MiniTrix for MiniMacs

Following the talk on categorizing MPC protocols, the author formerly known as Rasmus Lauritsen (currently known as Rasmus Zakarias) gave the talk “An Empirical Study and Some Improvements of the MiniMac Protocol for Secure Computation”, a joint work with Ivan Damgård and Tomas Toft. The talk is about a continuation of work on the protocol known as MiniMac, introduced by Ivan Damgård and Sarah Zakarias in 2012. Briefly explained, MiniMac is a maliciously secure MPC protocol for dishonest majority where the plain values are elements of small fields (in particular of characteristic 2, and thus Boolean circuits). The protocol is in the preprocessing model (meaning there is no need to know the input or the function specification during the offline phase) and has some similarity to the SPDZ protocol. More specifically, elements are additively shared over a finite field and Beaver triples are used for multiplication. However, using additive shares alone only yields semi-honest security, so MACs are added to the shares. Unfortunately, this is only secure if the MACs are from a large field, which is too inefficient. The solution is instead to concatenate many elements in a Single Instruction, Multiple Data (SIMD) manner, apply an error-correcting code to such a vector and then MAC the codeword.
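The passive core of the sharing-plus-Beaver-triple approach can be sketched as follows. This is a minimal toy (two parties, single bits, semi-honest, no MACs or codewords), so it illustrates the mechanism rather than MiniMac itself:

```python
# Toy sketch: XOR-sharing over GF(2) and a Beaver triple (a, b, c = a AND b)
# used to multiply shared bits, as in the passive core of SPDZ/MiniMac-style
# protocols. Illustrative only; a real protocol adds MACs for malicious security.
import random

def share(bit):
    """Split a bit into two XOR-shares."""
    s0 = random.randint(0, 1)
    return s0, bit ^ s0

def reconstruct(s0, s1):
    return s0 ^ s1

def beaver_and(x_sh, y_sh, triple_sh):
    """AND two shared bits using a preprocessed triple (consumed once per AND)."""
    (x0, x1), (y0, y1) = x_sh, y_sh
    (a0, a1), (b0, b1), (c0, c1) = triple_sh
    # The opened values leak nothing: a masks x and b masks y.
    d = reconstruct(x0 ^ a0, x1 ^ a1)
    e = reconstruct(y0 ^ b0, y1 ^ b1)
    # x&y = c ^ d&b ^ e&a ^ d&e; the constant term d&e goes to party 0 only.
    z0 = c0 ^ (d & b0) ^ (e & a0) ^ (d & e)
    z1 = c1 ^ (d & b1) ^ (e & a1)
    return z0, z1

# Offline phase (toy dealer): generate a correlated triple.
a, b = random.randint(0, 1), random.randint(0, 1)
triple = (share(a), share(b), share(a & b))

# Online phase: the shared AND agrees with the plain AND on all inputs.
for x in (0, 1):
    for y in (0, 1):
        z0, z1 = beaver_and(share(x), share(y), triple)
        assert reconstruct(z0, z1) == (x & y)
```

MiniMac's trick is then to pack many such values into one codeword and MAC the codeword, amortizing the authentication cost over the whole SIMD vector.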
Rasmus and his co-authors continue the work on the MiniMac protocol, and in particular introduce the first implementation of it. More specifically, they introduce concrete choices of parameters and error-correcting code, along with several optimizations increasing efficiency both in theory and in practice. One such optimization involves realizing SIMD AND operations on individual bits in GF(2^8) using Beaver-style triples. Another optimization makes the workload more symmetric when only two parties are running the protocol. Besides the more theoretical optimizations, Rasmus also explained that they have realized several implementation-level optimizations, such as using the Fast Fourier Transform for encoding.
In order to make a fair benchmark of their optimizations of MiniMac, the authors have implemented the protocol both with and without the optimizations, along with the protocol known as TinyOT [NNOB12]. What their implementation shows is that without optimizations TinyOT is a bit faster than MiniMac, but with the optimizations MiniMac becomes close to two orders of magnitude faster than TinyOT (in the amortized sense) when doing the “standard” MPC benchmark of oblivious AES encryption. In particular, the online phase gets down to 4 ms!

Categorizing MPC

It is the second day of SCN, and it is time for the second session on MPC. Jason Perry is in front of the projector to talk about “Systematizing Secure Computation for Research and Decision Support”, a joint work with Debayan Gupta, Joan Feigenbaum and Rebecca Wright. More specifically, the talk is a description of a new software project to categorize MPC protocols. The software includes a database, compiled by the authors, of different previously published MPC protocols, along with a GUI tool to make the database accessible for users. All protocols are projected onto around 22 discrete axes describing the different features of MPC protocols, going from weak to strong. Take, for example, the number of corruptions: the GUI has a bar where you can adjust whether you need a protocol secure against corruption of one third of the parties, less than half, or everyone except one. The same goes for maliciousness: you can adjust whether you need a protocol that is secure against semi-honest corruptions, covert corruptions or full-blown malicious corruptions. Unfortunately, not all aspects of protocols are easily projected onto an axis, for example security assumptions. This is solved in different ways; the security assumptions are handled by two axes. One is a binary choice of whether you want general assumptions or whether specific assumptions are acceptable; the other is a “level indicator” going from “none” to LWE and company, through one-way functions and trapdoor permutations. Needless to say, the categorization of MPC protocols is far from easy, and perhaps it is even harder to decipher all the papers to uncover the gritty details needed to project the protocols onto the axes of the database. This challenge is one of the reasons the authors decided to undertake the work: it is a very complex and time-consuming task to find the MPC protocols that actually fit a given setting.
Their hope is that their work will make this easier and in particular increase the adoption of MPC in the “real world”. Furthermore, their project also supports impossibility results, so should you desire a protocol with features that are theoretically impossible to achieve, the software will tell you so, and thus save you from roaming the Internet looking for something that can never be found.
Currently Jason and his co-authors have 190 papers in their database. Their GUI is called SysSC-UI and is open source. As future work, Jason hopes that the project can move towards a community-driven model where authors themselves submit their protocols. Finding other and easier ways to visualize protocols would also be desirable future work.

Communication-Efficient MPC for General Adversary Structures

It was a warm day in Amalfi, Italy. It was also the first day of SCN 2014; a cold breeze from the air conditioning passed through the room, and it was time for the first session of the conference, about MPC. Joshua Lampkins was in front of the projector to talk about “Communication-Efficient MPC for General Adversary Structures”, a joint work with Rafail Ostrovsky. Joshua described a new protocol for MPC for general adversary structures. Before going into detail, be aware that a general adversary structure describes the sets of players the adversary may corrupt, and these sets are not necessarily bounded by a threshold. That is, usually we consider either an honest majority or a dishonest majority and do not care which specific players are corrupted. However, in some cases certain sets of players are very unlikely to be corrupted together and collude. An example could be a protocol run by several countries: allied countries might be more likely to collude in corruption than countries who are at war with each other. How best to describe which parties might be corrupted together, and thus which kind of adversary structure a protocol tolerates, is not trivial. In general, two approaches exist: as a Monotone Span Program (MSP) or as a tree structure. In his talk, Joshua considered a special type of adversary structure called Q2. Q2 means that no two possible sets of corruptions together cover the entire set of players. More specifically, Joshua described the first unconditionally maliciously and adaptively secure MPC protocol that achieves linear communication complexity in the size of the MSP representing the adversary structure. This is an improvement over previous protocols, which are cubic in the size of the MSP.
The protocol combines several techniques, in particular verifiable secret sharing. Specifically, a type of secret sharing called Kudzu sharing is used for dispute handling in case of corrupted parties. Verifiability comes from double sharings, which are not trivial to achieve with the required complexity. The paper contains the details of the protocol along with the tricks used.

Fair enough

Blog about “On the Classification of Finite Boolean Functions up to Fairness” by Nikolaos Makriyannis.

Written by Carsten Baum

Nikolaos gave a talk about finite Boolean functions (think of finite sets X, Y and the functions mapping X x Y to {0,1}) and which of those are actually computable by two parties in a fair way. Fair here means that if at least one party gets the output, then both of them do. There is a classic result by Cleve (C’86) showing that there is no way to implement secure fair coin flipping between two parties, and there have been two recent lines of work:

1. GHKL’08 showed that there exists a protocol which makes it possible to compute a certain set of functions in a fair way. Asharov then showed in A’14 a classification of the functions that one can evaluate using the GHKL’08 protocol.
2. Based on C’86, every function that implies secure coin flipping cannot be evaluated in a fair way. ALR’13 gave a classification of functions that imply secure coin flipping.

The talk was about extensions of both lines of work. First, he gave a definition with two properties of a function that can be used to determine whether GHKL can be applied (these are quite technical, so I refer to the paper here). Second, he showed that a larger class of functions implies coin flipping. His proof relies on the fact that sampling random variables under certain conditions implies secure coin flipping, so these instances of sampling must be impossible as well. He then showed that so-called “semi-balanced functions” imply sampling. This extends the result of ALR’13.