Details can be found at:

http://crypto.biu.ac.il/5th-biu-winter-school

To get ready for the school, you can watch the videos of the 1st Winter School on **Secure Computation** here:

http://crypto.2bwebsite.co.il/1st-winter-school

---

*Blog written by Rasmus Lauritsen*

In multi-party computation a set of $n$ players wants to compute a function $y = f(x_1, \ldots, x_n)$, where each input $x_i$ is private information of player $i$. Security in this setting means that each player learns essentially *nothing* new other than the result $y$.
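
To make the definition concrete, here is a minimal additive-secret-sharing sketch for the special case $f(x_1, x_2, x_3) = x_1 + x_2 + x_3$; the modulus and helper names are illustrative choices, not from the talk:

```python
import random

P = 2**61 - 1  # a prime modulus (illustrative choice)

def share(x, n):
    """Split x into n random shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each player shares its private input; every player locally adds the
# shares it holds, and combining those local sums reveals only y.
inputs = [5, 11, 2]
all_shares = [share(x, 3) for x in inputs]
local_sums = [sum(col) % P for col in zip(*all_shares)]
assert reconstruct(local_sums) == sum(inputs) % P  # y = 18
```

Any $n-1$ of the shares are uniformly random, which is why no coalition short of all players learns anything beyond $y$.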

In the above definition, security holds as long as at least one player remains honest (i.e., is not corrupted). In this presentation the notion of security is extended to the setting where all parties are corrupted, yet the computation leaves an auditable transcript that allows third-party observers to audit the computation afterwards. Naturally, the transcript is public information, and the above security definition must still hold in the presence of a single honest party.

One setting in particular stands out: a small number of MPC servers carry out a computation on behalf of a set of clients (the stakeholders providing the inputs) that is too large for practical MPC. In this client/worker setting, the auditable MPC approach suggested in this talk allows the clients to verify from the transcript that none of the MPC servers deviated from the protocol and that they actually performed the specified computation.

The famous sugar-beet auction taking place in Denmark each year is an example of such a scenario: the farmers are clients, and indeed stakeholders, providing inputs to a small set of MPC servers, one of which is maintained by their trade union.

An example of a concrete construction is presented: in the context of the SPDZ protocol, a bulletin board is used on which the input providers publish Pedersen commitments to their input values. These commitments support the same homomorphic operations as the SPDZ protocol performs on values. Therefore, with the description of the function also public on the bulletin board, any auditor can replay (and thereby audit) the function on the input commitments to obtain a commitment to the result. Because of the hiding property of the commitment scheme, this reveals no extra information. The binding property ensures that even if all MPC servers deviate and compute another function, they will be caught by an auditor.
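
As a rough sketch of why replaying works for the linear part of a computation, the following toy example shows the additive homomorphism of Pedersen commitments. The tiny group parameters are illustrative only, and SPDZ-style auditing also needs extra machinery for multiplications, which is omitted here:

```python
import random

# Toy parameters for illustration only -- a real deployment uses a large
# group, and h must be generated so that nobody knows log_g(h).
q = 1019           # prime order of the subgroup
p = 2 * q + 1      # 2039, a safe prime; squares mod p have order q
g, h = 4, 9        # two generators of the order-q subgroup of squares

def commit(m, r):
    """Pedersen commitment C = g^m * h^r mod p (hiding and binding)."""
    return pow(g, m % q, p) * pow(h, r % q, p) % p

m1, r1 = 42, random.randrange(q)
m2, r2 = 17, random.randrange(q)

# Additive homomorphism: multiplying commitments yields a commitment to
# the sum of the committed values, so an auditor can replay linear steps
# of the computation directly on the published commitments.
assert commit(m1, r1) * commit(m2, r2) % p == commit(m1 + m2, r1 + r2)
```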

A notable feature of this construction is that it introduces only a small overhead, linear in the number of inputs. Since SPDZ is secure against $n-1$ corruptions, an actual audit (a replay on the commitments) only has to take place if all computation servers are suspected to be corrupt.

---

In this blog post, I will write a little bit about Thomas Jakobsen’s talk “Faster Maliciously Secure Two-Party Computation Using the GPU”. For this blog post, I assume that the reader is familiar with Yao’s original Garbled Circuit (GC) protocol for secure two-party computation (2PC).

Thomas started with a description of techniques for making GC protocols actively secure. The approach used in their paper is cut-and-choose, meaning that party A creates not one but multiple garbled versions of the same circuit. About half of those garbled circuits are chosen by the other party B to be completely opened (to check that they were constructed correctly). If they were constructed correctly, then with a certain probability a certain number of the remaining circuits must be correct as well. This introduces a number of other problems: (1) A might provide inconsistent inputs of his own, and (2) A might send incorrect input wires for B. The authors deal with the second problem using an approach by [LP07], which transforms B's input such that A learning a few bits of B's new input reveals no information about B's actual input. To tackle problem (1), the authors use an approach from [sS13,FN13], which consists of computing a universal hash function on A's inputs, with B checking that these values are consistent across all circuits sent by A.
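
A minimal sketch of the consistency-check idea (the specific hash family and all names here are my choices, not the paper's): B picks a random GF(2)-linear hash and verifies that A's input hashes to the same digest in every circuit.

```python
import random

def random_linear_hash(n_bits, out_bits):
    """A random GF(2)-linear hash, represented as out_bits rows of n_bits."""
    return [[random.randrange(2) for _ in range(n_bits)] for _ in range(out_bits)]

def apply_hash(matrix, bits):
    """Multiply the bit vector by the matrix over GF(2)."""
    return [sum(r * b for r, b in zip(row, bits)) % 2 for row in matrix]

H = random_linear_hash(8, 4)
input_a = [1, 0, 1, 1, 0, 0, 1, 0]

# A uses the same input in all 5 circuits, so all digests agree.
digests = [apply_hash(H, input_a) for _ in range(5)]
assert all(d == digests[0] for d in digests)
# An inconsistent input would only evade detection if the difference lay
# in the kernel of H, which happens with probability 2^-4 here.
```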

One could now evaluate all evaluation circuits and take the majority of the outputs. This incurs quite a substantial overhead, but one can do better using the forge-and-lose technique due to [B13, HKE13, L13]: if one of the circuits does not evaluate to the same output value, then B can reconstruct A's inputs (which enables him to evaluate the computed circuit in the clear). As the main contribution of their work, the authors achieve this property without an additional MPC as in [L13] or trapdoor commitments as in [B13]. Instead, they use the free-XOR optimization technique, which reveals A's input whenever two different output wire labels of the same gate become known to B. For every output gate, one can then construct one polynomial through all the 0-wire labels and one through all the 1-wire labels. By choosing the degree appropriately, this allows computing all output values if at least one circuit is correct (which is guaranteed by the cut-and-choose) and one is wrong. In the protocol, one additionally has to make sure that all wire labels indeed lie on a polynomial, but this was outside the scope of Thomas' talk.
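
The interpolation idea behind the polynomial trick can be sketched as follows (field, degree, and names are illustrative, not the paper's parameters): any three points of a degree-2 polynomial determine every other point, so enough correct wire labels reveal the rest.

```python
# The polynomial trick in miniature.
P = 2**61 - 1  # prime field modulus

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x, mod P."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        # Divide by den via Fermat's little theorem (P is prime).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# A degree-2 polynomial is fixed by any 3 of its points, so three
# correct output-wire labels determine every remaining label.
poly = lambda t: (7 + 3 * t + 5 * t * t) % P
known = [(1, poly(1)), (2, poly(2)), (4, poly(4))]
assert lagrange_eval(known, 3) == poly(3)  # recover the "missing" label
```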

He finally presented benchmark numbers for their implementation (which makes heavy use of a GPU). According to the benchmark results, their work outperforms all preceding approaches in terms of runtime, even though the comparison is potentially unfair due to the use of the GPU and the fact that security only holds in the random oracle model.

---

The motivations for considering adaptive adversaries are many: first of all, it yields stronger security and leakage resilience, and in a more practical setting it has applications in cloud computing.

To obtain the first result, Muthu uses simulatable public-key encryption and UC puzzles. He furthermore uses non-malleable commitments in his construction.

To achieve the second result, Muthu overcomes an impossibility result by slightly tweaking the definitions used.

The details and tricks are plentiful and not that easily explained in a blog post, so I direct the interested reader to the paper.

---

Rasmus and his co-authors continue work on the MiniMac protocol and, in particular, introduce the first implementation of it. More specifically, they present concrete choices of parameters and error-correcting code along with several optimizations that increase efficiency both in theory and in practice. One such optimization realizes SIMD AND operations on individual bits in $GF(2^8)$ using Beaver-style triples. Another makes the workload more symmetric when only two parties run the protocol. Besides the more theoretical optimizations, Rasmus also explained that they realized several implementation-level optimizations, such as using the Fast Fourier Transform for encoding.
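
To illustrate the Beaver-triple idea in its simplest form, here is a single-bit, two-party AND on XOR-shares with a dealer-supplied triple. MiniMac packs many such bits into codewords; this sketch and its names are mine, not the implementation's:

```python
import random

# A trusted dealer is assumed to hand out XOR-shares of a random triple
# (a, b, c) with c = a & b.
def xor_share(bit):
    """Split a bit into two XOR-shares."""
    s0 = random.randrange(2)
    return s0, bit ^ s0

a, b = random.randrange(2), random.randrange(2)
c = a & b
a0, a1 = xor_share(a)
b0, b1 = xor_share(b)
c0, c1 = xor_share(c)

x, y = 1, 1                       # the secret bits whose AND we want
x0, x1 = xor_share(x)
y0, y1 = xor_share(y)

# Both parties open their masked inputs; d = x^a and e = y^b are public
# and reveal nothing about x and y because a and b are uniformly random.
d = (x0 ^ a0) ^ (x1 ^ a1)
e = (y0 ^ b0) ^ (y1 ^ b1)

# Each party computes its share of x & y locally; party 0 also folds in
# the public correction term d & e.
z0 = c0 ^ (d & b0) ^ (e & a0) ^ (d & e)
z1 = c1 ^ (d & b1) ^ (e & a1)
assert z0 ^ z1 == (x & y)
```

The correctness follows from $xy = (a+d)(b+e) = c + db + ea + de$ over GF(2), which is exactly what the two local shares XOR to.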

In order to benchmark their optimizations fairly, the authors implemented MiniMac both with and without the optimizations, along with the protocol known as TinyOT [NNOB12]. Their implementation shows that without the optimizations TinyOT is a bit faster than MiniMac, but with them MiniMac becomes close to two orders of magnitude faster than TinyOT (in the amortized sense) on the "standard" MPC benchmark of oblivious AES encryption. In particular, the online phase is down to 4 ms!

---

Jason and his co-authors currently have 190 papers in their database; their GUI, called SysSC-UI, is open source. As future work, Jason hopes that the project can move towards a community-driven model where authors themselves submit their protocols. Finding other and easier ways to visualize protocols would also be desirable future work.

---

The protocol combines several techniques, in particular verifiable secret sharing. Specifically, a type of secret sharing called Kudzu sharing is used for dispute handling in case of corrupted parties. Verifiability comes from double sharings, which is not trivial to achieve with the required complexity. The paper contains the details of the protocol along with the tricks used.

---

*Written by Carsten Baum*

Nikolaos gave a talk about finite Boolean functions (think of finite sets X, Y and functions mapping X × Y to {0,1}) and which of those are actually computable by two parties in a fair way. Fair here means that if at least one party gets the output, then both of them do. A classic result by Cleve (C'86) shows that there is no way to implement secure fair coin flipping between two parties, and there have been two recent lines of work:

1. GHKL'08 showed that there exists a protocol making it possible to compute a certain set of functions in a fair way. Asharov then gave in A'14 a classification of the functions that can be evaluated using the GHKL'08 protocol.

2. Based on C’86, every function that implies secure coin flipping cannot be evaluated in a fair way. ALR’13 gave a classification of functions that imply secure coin flipping.

The talk was about extensions of both lines of work. First, he gave a definition with two properties of a function that can be used to determine whether GHKL can be applied (these are quite technical, so I refer to the paper here). Second, he showed that a larger class of functions implies coin flipping. His proof relies on the fact that sampling random variables under certain conditions implies secure coin flipping, so fair protocols for these instances of sampling must be impossible as well. He then showed that so-called "semi-balanced functions" imply sampling. This extends the result of ALR'13.

---

Fairness is the idea that if one player gets the result, then all players get the result. A classical result by Cleve from 1986 showed that, for certain functionalities, fairness is impossible without an honest majority. To deal with this, instead of guaranteeing fairness, the protocol ensures that if fairness is breached, the honest players receive a payment in lieu of it, while at the same time the cheaters lose money. This is achieved via the use of Bitcoin, which is already commonly used for lotteries, gambling, and auctions. A further requirement is that honest parties never lose coins. The formal framework used in this work is the real/ideal-world paradigm, and the simulation is straight-line.

First, they design and implement the claim-or-refund functionality. It is a two-party functionality in which the sender deposits coins together with an associated time bound $t$ and a circuit; the receiver can claim the deposit by revealing a satisfying assignment for the circuit within time $t$.

To realize general protocols, the construction works as follows: first, the function is evaluated and the result is secret-shared; each player then deposits coins. The idea is that if a player reveals his share, he gets his coins back. If he does not, he must pay a penalty to all other players.
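
The deposit-and-penalty bookkeeping can be sketched as follows; the names and amounts are illustrative, not taken from the paper:

```python
PENALTY = 10

def settle(deposits, revealed, n_players):
    """Payout per player after the reveal deadline passes."""
    balance = [0] * n_players
    for player in range(n_players):
        if revealed[player]:
            balance[player] += deposits[player]      # deposit refunded
        else:
            for other in range(n_players):           # cheater compensates
                if other != player:                  # every other player
                    balance[other] += PENALTY
    return balance

# Player 2 aborts without revealing its share: players 0 and 1 are each
# compensated with the penalty, so honest parties never lose coins.
print(settle([2 * PENALTY] * 3, [True, True, False], 3))  # → [30, 30, 0]
```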

As future work, it would be interesting to reduce the penalty needed for secure lottery.

---

*Blog written by Samuel Ranellucci*

The protocols are black-box and composable and allow the realization of all tasks in MPC. UC-secure protocols are impossible in the standard model, but Angel-based UC protocols are realizable in the plain model. In this work, they construct an Angel-UC commitment with log(n^2) rounds based on semi-honest OT.

The old recipe is: CCA-secure commitments (CCA-Com) plus semi-honest OT yield Angel-UC MPC. A CCA-Com is a commitment that remains hiding even in the presence of a commitment-extraction oracle, which may be queried on any commitment other than the one under attack. The CCA-Com in this paper is constructed from one-way functions by creating (a) good concurrently extractable commitments (CECOM) and (b) non-malleable commitments.

A good concurrently extractable commitment is one where the oracle can only extract a value from a valid commitment. This is achieved by building weakly extractable commitments, where extraction fails with probability 1/2, but an accepted commitment is invalid with probability less than 1/2. Combining these weakly extractable commitments with previous cut-and-choose-based approaches yields a good CECOM.

The techniques in this paper could prove useful for further reducing the complexity of composable MPC.

---