Fair enough

Blog about “On the Classification of Finite Boolean Functions up to Fairness” by Nikolaos Makriyannis.

Written by Carsten Baum

Nikolaos gave a talk about finite Boolean functions (think of finite sets X, Y and the functions mapping X x Y to {0,1}) and which of those can actually be computed by two parties in a fair way. Fair here means that if at least one party gets the output, then both of them do. There is a classic result by Cleve (C’86) showing that there is no way to implement fair secure coin flipping between two parties, and there have been two recent lines of work:

1. GHKL’08 showed that there exists a protocol that makes it possible to compute a certain set of functions in a fair way. Asharov then showed in A’14 a classification of the functions that one can evaluate using the GHKL’08 protocol.
2. Based on C’86, every function that implies secure coin flipping cannot be evaluated in a fair way. ALR’13 gave a classification of functions that imply secure coin flipping.

The talk was about extensions of both lines of work. First, he gave a definition of two properties of a function that can be used to determine whether GHKL can be applied (these are quite technical, so I refer to the paper here). Second, he showed that a larger class of functions implies coin flipping. His proof relies on the fact that sampling random variables under certain conditions implies secure coin flipping, so these instances of sampling must be impossible as well. He then shows that so-called “semi-balanced functions” imply sampling. This extends the result of ALR’13.

How to Use Bitcoin to Design Fair Protocols

Result by Iddo Bentov, Ranjit Kumaresan
Blog by Samuel Ranellucci

Fairness is the idea that if one player gets the result, then all players
get the result.
A classical result from Cleve in 1986 showed that fairness without an honest majority
is impossible for certain functionalities.

To deal with this, instead of guaranteeing fairness, the protocol
ensures that if fairness is breached, the honest players receive payment
in lieu of fairness while at the same time the cheaters lose money.
This is achieved via the use of Bitcoin.

Bitcoin is often used for lotteries, gambling and auctions.
We also want honest parties to never lose coins.

The formal framework used in this work is the real/ideal-world paradigm,
and the proofs use straight-line simulation.

First they design and implement the claim-or-refund functionality.
It is a two-party functionality in which the sender deposits coins together with
a circuit and an associated time bound t. The receiver can claim the deposit if he
reveals a satisfying assignment for the circuit within time t; otherwise the coins
are refunded to the sender.
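The claim-or-refund functionality can be sketched as a small state machine. This is a minimal illustration of the semantics described above, not the paper's Bitcoin realization; the class name, the `phi` predicate interface and the toy witness check are my own assumptions.

```python
# Toy model of claim-or-refund: the sender deposits coins with a
# predicate phi (standing in for the circuit) and a deadline; the
# receiver claims by revealing a satisfying assignment in time,
# otherwise the sender is refunded. (Illustrative names only.)
class ClaimOrRefund:
    def __init__(self, sender, receiver, coins, phi, deadline):
        self.sender, self.receiver = sender, receiver
        self.coins, self.phi, self.deadline = coins, phi, deadline
        self.settled = False

    def claim(self, witness, now):
        """Receiver gets the coins for a valid witness within time."""
        if not self.settled and now <= self.deadline and self.phi(witness):
            self.settled = True
            return (self.receiver, self.coins)
        return None

    def refund(self, now):
        """After the deadline, unclaimed coins go back to the sender."""
        if not self.settled and now > self.deadline:
            self.settled = True
            return (self.sender, self.coins)
        return None

# Example run: the "circuit" checks a toy witness.
f = ClaimOrRefund("Alice", "Bob", coins=5,
                  phi=lambda w: w == 42, deadline=10)
assert f.claim(41, now=3) is None          # wrong witness: no payout
assert f.claim(42, now=3) == ("Bob", 5)    # valid claim within time
```

On Bitcoin itself the deadline would be enforced by transaction time locks rather than by a trusted party, which this toy model glosses over.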

To realize general protocols, the construction works as follows:
First compute the function and secret-share the result.
Each player then deposits coins.
The idea is that if a player reveals his share, he gets his coins back.
If he doesn’t, he must pay a penalty to all other players.
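The share-then-reveal-or-pay idea can be sketched numerically. This is my own simplification of the mechanism described above (XOR sharing, flat penalties), not the paper's actual Bitcoin transactions.

```python
import secrets

def share(result: int, n: int):
    """XOR-secret-share `result` among n players (toy 32-bit shares)."""
    shares = [secrets.randbits(32) for _ in range(n - 1)]
    last = result
    for s in shares:
        last ^= s                     # last share makes the XOR equal result
    return shares + [last]

def settle(revealed, penalty):
    """Net balance per player after the reveal phase: a revealer gets his
    deposit back (net 0) plus `penalty` from each non-revealer; a
    non-revealer pays `penalty` to every other player (netting out
    payments among non-revealers)."""
    n = len(revealed)
    cheaters = revealed.count(False)
    honest = n - cheaters
    return [penalty * cheaters if r else -penalty * honest for r in revealed]

shares = share(7, 3)
assert shares[0] ^ shares[1] ^ shares[2] == 7      # shares reconstruct result
assert settle([True, True, False], penalty=1) == [1, 1, -2]
```

Note that the balances always sum to zero: the penalties of the non-revealers are exactly the payments received by the honest players.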

As future work, it would be interesting to reduce the penalty
required for a secure lottery.

Round-efficient black-box constructions of composable multi-party computation

Result by Susumu Kiyoshima
Blog written by Samuel Ranellucci

The protocols are black-box and composable and allow the realization of all
MPC tasks.

UC-secure protocols are impossible in the plain model, but in the Angel-based UC
framework protocols are realizable in the plain model.

In this work, they construct an Angel-UC commitment with O(log^2 n) rounds
based on semi-honest OT.

The old recipe is: CCA-Com plus semi-honest OT yields Angel-UC MPC.

CCA-Coms are commitments that remain hiding even in the presence of a
commitment-extraction oracle, which can be queried on any commitment
other than the one that was committed to.

The CCA-Coms in this paper are constructed from OWFs by creating
a) good concurrently extractable commitments (CECom)
b) non-malleable commitments

A good concurrently extractable commitment is one where the oracle
can only extract a value from a valid commitment.

This is done by first making weakly extractable commitments, where extraction fails
with probability 1/2, but an accepted commitment is invalid with probability
less than 1/2. These weakly extractable commitments, combined with previous approaches
based on cut-and-choose, allow the construction of good CECom.
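As a toy illustration of why a weak extractor can be amplified: if each copy of the weak commitment independently resists extraction with probability 1/2, then out of k parallel copies the oracle fails on all of them only with probability 2^-k. This is my own back-of-the-envelope calculation, not the paper's actual cut-and-choose analysis, which also has to handle validity.

```python
# Toy amplification bound: probability that every one of k independent
# weak extractors fails, each failing with probability p_fail.
def all_fail_prob(k: int, p_fail: float = 0.5) -> float:
    return p_fail ** k

assert all_fail_prob(1) == 0.5
assert all_fail_prob(40) == 2 ** -40   # negligible already at k = 40
```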

The techniques in this paper could prove useful for further
reducing the complexity of composable MPC.

Client-Server Concurrent Zero Knowledge with Constant Rounds and Guaranteed Complexity

Result by Ran Canetti, Abhishek Jain, Omer Paneth
Blog written by Samuel Ranellucci

Concurrent self-composition is the task of making a protocol
secure when many instances are run concurrently.

The goal of this paper is to achieve concurrent zero-knowledge
in the client-server model.
This work uses bounded-concurrent ZK.
In contrast to earlier protocols, where the amount of time that
a client has to wait is undetermined and unbounded, here the amount of time
that a client will have to wait is fixed at the start of the run of the zero-knowledge
protocol. This is based on the earlier Persiano-Visconti protocol.

The main result is a 6-round concurrent ZK protocol with guaranteed
communication complexity O(n^c), where n is the number of concurrent proofs taking place.

Physical Zero-Knowledge Proofs of Physical Properties

Result by Ben Fisch, Daniel Freund, Moni Naor
Blog written by Samuel Ranellucci

The two main goals of this paper are to
formalize physical protocols for zero-knowledge and to
prove things that cannot be proven digitally.

The paper focuses on two tasks: nuclear disarmament and DNA profiling.
The talk focused on DNA profiling.

In this setting, the defendant wishes to prove that his DNA is not the one found at
the crime scene without handing his sample over to the police. This protects
the defendant against contamination and against police corruption.

First, a protocol based on the zero-knowledge graph-isomorphism protocol was presented.
Essentially, using tamper-evident seals and masking tape, the defendant
first sends a sealed sample; the police officer then takes one sample,
either from the crime scene or from the given sample, and sends it back.
The defence then has to determine which sample was taken.
A second protocol, based on a proof of set size, relies on weaker assumptions.
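For reference, the classic digital graph-isomorphism ZK protocol that the physical protocol mimics can be sketched as follows. This is the standard textbook protocol (prover permutes one graph, verifier challenges with a bit), not the paper's physical version; graph encoding and helper names are my own.

```python
import random

# Graphs are sets of frozenset edges over vertices 0..n-1.
def permute(graph, perm):
    """Apply a vertex permutation to an edge set."""
    return {frozenset({perm[u], perm[v]})
            for u, v in (tuple(e) for e in graph)}

def zk_round(G0, G1, sigma, n):
    """One honest round; the prover knows sigma with permute(G0, sigma) == G1.
    Returns True iff the verifier accepts."""
    pi = list(range(n))
    random.shuffle(pi)                    # prover's fresh random permutation
    H = permute(G0, pi)                   # "commitment": a shuffled copy of G0
    b = random.randint(0, 1)              # verifier's challenge bit
    if b == 0:
        return permute(G0, pi) == H       # open the isomorphism H ~ G0
    inv = [0] * n                         # sigma^{-1}
    for i, s in enumerate(sigma):
        inv[s] = i
    resp = [pi[inv[i]] for i in range(n)] # pi composed with sigma^{-1}
    return permute(G1, resp) == H         # open the isomorphism H ~ G1

n = 4
sigma = [1, 2, 3, 0]
G0 = {frozenset({0, 1}), frozenset({1, 2})}
G1 = permute(G0, sigma)
assert all(zk_round(G0, G1, sigma, n) for _ in range(50))
```

In the physical analogue, the sealed and masked samples play the role of the committed graph H, and "which sample was taken" plays the role of the challenge bit.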

When modelling a physical zero-knowledge proof, it is necessary
to go from the real world to the ideal world and then back to the actual model.

In contrast to standard ZK, the physical-ZK ideal functionality has no witness.

Secure Multi-Party Computation with Identifiable Abort

Result by Yuval Ishai, Rafail Ostrovsky and Vassilis Zikas
Blog written by Roberto Trifiletti

The first talk of the first Secure Computation session was given by Vassilis Zikas, who presented the paper “Secure Multi-Party Computation with Identifiable Abort”, coauthored with Yuval Ishai and Rafail Ostrovsky.
As the title says, the topic was MPC with identifiable abort (ID-MPC). Vassilis started out explaining the goals of general MPC, namely computing a function on joint inputs while guaranteeing that only the output is revealed to each party. The following history of MPC was mapped out:
– In the 1980s the foundations of MPC were explored and established.
– In the 1990s major asymptotic advances were made.
– And in the 21st century implementations and practical MPC saw the light of day.

However, Vassilis stressed that most of the research in the area of MPC has ignored the problem of abort. It is folklore that general MPC cannot withstand active premature abort. This has had the effect that most work on practical MPC only delivers security with abort, meaning the adversary can abort the protocol at any time, in particular after he learns the output and before the honest parties do (no fairness). Many MPC protocols do not even have mechanisms to identify which party caused the abort. Since the honest parties in this case have no way of knowing which party to either exclude or repair, the only strategy is to start over from scratch. However, the same party can cause an abort repeatedly in each execution, which effectively mounts a DoS attack on the computation.

This work aims to address the above scenario by providing MPC protocols with identifiable abort, meaning that if a party causes the protocol to abort, the rest of the parties learn his identity and can take appropriate action. Clearly this eliminates the DoS attack sketched above.

An earlier result from [IOS12] shows that information-theoretic (IT) ID-MPC cannot be achieved assuming OT alone. In fact, it was shown that any type of pairwise correlated randomness is insufficient. In the computational setting, the GMW protocol (with a twist) [CDN01] is known to deliver ID-MPC. However, this protocol is inherently non-black-box and uses public-key primitives for each gate of the circuit, so it is not suitable for implementation.
The work has two major contributions:
– The first IT ID-MPC assuming n-wise correlated randomness. In particular, it presents a compiler taking a semi-honest IT MPC protocol in the correlated-randomness (CR) model as input and outputting an IT-secure ID-MPC protocol in the CR model.
– Computationally secure ID-MPC using an OT protocol in a black-box manner. In particular, a compiler taking an IT ID-MPC protocol in the CR model as input and outputting a computationally secure ID-MPC scheme.

The first (IT) compiler can be thought of as the ID-MPC equivalent of the GMW compiler. It makes use of a one-to-many commitment scheme, instantiated using IT signatures with distributed verification keys [SHZI02, SS11], and IT ZK proofs, which in turn are constructed using the IPS paradigm.

The second compiler makes black-box use of the OT protocol to implement the sampling (setup) step of the input IT ID-MPC protocol. Thus it is the input IT ID-MPC protocol that is being run, but the initial setup phase is instantiated using the (computational) black-box OT protocol, and the resulting protocol is therefore computationally secure.

As a final remark, it was highlighted that this work presents the first IT ID-MPC protocol from n-wise correlated randomness, where before it was only known that it is impossible from 2-wise correlated randomness. Open problems include looking for more efficient solutions, implementing the current ones and comparing, and investigating whether n-wise correlation is indeed required, or whether (n-1)-wise correlation could be sufficient.

Amortizing Garbled Circuits and Cut-and-Choose Yao-Based Secure Computation in the Online/Offline and Batch Settings

Two merged papers, by Yan Huang, Jonathan Katz, Vladimir Kolesnikov, Ranjit Kumaresan and Alex J. Malozemoff, and by Yehuda Lindell and Ben Riva, respectively
Blog written by Roberto Trifiletti

The second talk of the last session of CRYPTO 2014 was given by Ranjit Kumaresan, who presented two merged independent works on how to amortize the construction of garbled circuits in the batch/multiple-execution setting. Additionally, the work [LR14] considers the online/offline paradigm for secure computation and tries to push as much of the computation as possible into the offline phase. Both works are motivated by the fact that practice shows that 99.999% of the cost of implemented protocols is due to the number of garbled circuits, so any reduction in the replication factor will have great performance implications. To give an idea of where the state of the art is, Ranjit summarized the last couple of years’ improvements in the replication factor for achieving security $2^{-40}$:
– 680 [LP07]
– 128 [LP11]
– 125 [sS11]
– 48 [HKE]
– 40 [Lin13]

It seems inherent that one cannot go below 40 circuits for achieving security $2^{-40}$, at least in a general setting where one garbles entire circuits. However, both works were inspired by the LEGO approach [NO09], which works slightly differently than standard garbled-circuit approaches. Briefly, the LEGO approach first garbles a large number of gates, independently of the function to be evaluated, and performs cut-and-choose at the gate level. At a later stage the wires of the remaining gates are soldered together to form buckets, which now compose the desired circuit. For each bucket the output key is taken as the majority. Because of this, the LEGO approach only requires a replication factor of O(s/log |C|) for security $2^{-s}$, meaning LEGO only gets more efficient the more gates you garble.

By restricting the setting to that of multiple executions (the batch setting), both [HKKKM14] and [LR14] are able to benefit from the above LEGO approach. Here the task is to evaluate the same function a number of times (on different inputs), which results in (amortized) much better replication factors than for a single execution. In both works one can think of each of the garbled circuits produced as a garbled gate in the LEGO setting. Thus the constructor garbles a lot of circuits for the same function f and the receiver performs cut-and-choose on these as usual. Next, the receiver partitions the remaining garbled circuits into execution sets, where each set can be thought of as a soldered bucket in the LEGO setting. By using the forge-and-lose protocol of [L13], only a single circuit in each set is required to be correct, which further reduces the replication factor. Loosely, the forge-and-lose approach [B13, HKE13, L13] guarantees that if two circuits (on the same input) differ in output, then the receiver can extract the sender’s input and thereby finish the computation by himself. It is noted that [LR14] goes a little further for concrete efficiency and considers a varying check fraction p, where usually protocols check 50% of all garbled circuits in cut-and-choose.
For concrete numbers: for N = 1000 batch executions and security $2^{-40}$, the [LR14] construction requires 7059 circuits, of which a fraction p = 15% is checked in the offline phase; the remaining circuits are randomly mapped into sets of size 6 and evaluated. This yields a replication factor of ~7.06 (when amortizing over all 1000 executions), far surpassing the former 40 of the single-execution setting. The [LR14] paper contains numerous tables and graphs for concrete security parameters.
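The quoted parameters are internally consistent, as a quick sanity check shows (reading the numbers exactly as stated above: 7059 circuits, 15% checked, buckets of 6 over 1000 executions):

```python
# Sanity check of the [LR14] batch parameters quoted above.
total, p, N, bucket = 7059, 0.15, 1000, 6

checked = round(total * p)          # circuits opened in cut-and-choose
remaining = total - checked         # circuits left for evaluation

assert remaining == N * bucket      # 6000 circuits fill 1000 buckets of 6

replication = total / N             # amortized circuits per execution
assert abs(replication - 7.06) < 0.01
assert replication < 40             # vs. 40 circuits per single execution
```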