Publications
in Reverse Chronological Order

Itay Berman, Iftach Haitner and
Aris Tentes
Coin Flipping of Any Constant Bias Implies One-Way
Functions
[Abstract]
[Slides]
STOC 2014
[PDF]
We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., .499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias (√2 − 1)/2 ≈ .207. Unlike the result of Haitner and Omri, our result holds also for
weak coin-flipping protocols.

Iftach Haitner and Eliad Tsfadia
An Almost-Optimally Fair Three-Party Coin-Flipping Protocol [Abstract]
[Slides]
STOC 2014
[PDF]
In a multi-party fair coin-flipping protocol, the parties output a common
(close to) unbiased bit, even when some corrupted parties try to bias the
output. Cleve [STOC 1986] showed that in the
case of a dishonest majority (i.e., at least half of the parties can be
corrupted), the corrupted parties can bias the honest parties' common output bit
by Ω(1/m) in any
m-round coin-flipping protocol. For more than two decades,
however, the best known coin-flipping protocols had bias
Θ(t/√m) against a
dishonest majority, where t is the number of corrupted parties. This was changed
by a recent breakthrough result of Moran et al. [TCC 2009], who constructed an
m-round, two-party coin-flipping protocol with optimal bias
Θ(1/m). In a
subsequent work, Beimel et al. [Crypto 2010] extended this result to the
multi-party case, in which less than
2/3 of the parties can be corrupted. More
specifically, for any t < (2/3)·k, they presented an m-round,
k-party protocol
with bias Θ(2^{2t−k}/m) against (up to) t corrupted parties. Still, for the case of
2/3 (or more) corrupted parties, the best known protocol had bias
Θ(t/√m). In
particular, this was the state of affairs for the natural three-party case
(against two corrupted parties).
We make a step towards eliminating the above gap, presenting an
m-round, three-party coin-flipping protocol with bias
O(log² m)/m. Our
approach (which we also apply to the two-party case) does not follow the
"threshold round" paradigm used in the works of Moran et al. and Beimel et al.,
but rather is a take-off of the majority protocol of Cleve (used to obtain the
Θ(t/√m)-bias protocol mentioned above).

Iftach Haitner,
Jonathan J. Hoch,
Omer Reingold
and Gil Segev
Finding Collisions in Interactive Protocols — Tight
Lower Bounds on the Round Complexity and Communication
Complexity of Statistically Hiding Commitments [Abstract]
Manuscript 2013
[PDF]
We study the round complexity and communication complexity of various cryptographic protocols. We give tight lower bounds on the round complexity and
communication complexity of any fully-black-box reduction of a statistically
hiding commitment scheme from one-way permutations, and from trapdoor permutations. As a corollary, we derive similar
tight lower bounds for several other cryptographic protocols, such as
single-server private information retrieval, interactive hashing, and oblivious
transfer that guarantees statistical security for one of the parties. Our techniques extend the collision-finding oracle due to Simon (EUROCRYPT '98) to the setting of
interactive protocols, and the reconstruction paradigm of Gennaro and Trevisan (FOCS '00).

Itay Berman, Iftach Haitner, Ilan Komargodski
and Moni Naor
Hardness Preserving Reductions via Cuckoo Hashing
[Abstract]
TCC 2013
[PDF]
Draft of full version
[PDF]
A common method for increasing the usability and uplifting the security of
pseudorandom function families (PRFs) is to "hash" the inputs into a smaller
domain before applying the PRF.
This approach, known as "Levin's trick", is used to achieve "PRF domain
extension" (using a short, e.g., fixed, input-length PRF to get a
variable-length PRF), and more recently to transform
non-adaptive PRFs into adaptive ones. Such reductions, however, are vulnerable to
a "birthday attack": after √|U| queries to the resulting
PRF, where U is the range of the hash function, a
collision (i.e., two distinct inputs with the same hash value) happens with high
probability. As a consequence, the resulting PRF is insecure against an attacker
making this number of queries.
In this work we show how to go beyond the birthday-attack barrier, by replacing
the above simple hashing approach with a variant of cuckoo hashing
— a hashing paradigm typically used
for resolving hash collisions in a table, by using two hash functions and two
tables and cleverly assigning each element to one of the two tables. We use
this approach to obtain:
(i) A domain-extension method that requires just two calls to
the original PRF, can withstand as many queries as the original domain size, and
has a distinguishing probability that is exponentially small in the
non-cryptographic work.
(ii) A security-preserving reduction from non-adaptive to
adaptive PRFs.
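The cuckoo-hashing paradigm described above (two hash functions, two tables, and each element stored in one of its two candidate slots) can be sketched with a toy implementation. The table size, seeded hash functions, and eviction bound below are illustrative choices only, not the parameters of the paper's PRF construction:

```python
import random

class CuckooTable:
    """Toy cuckoo hash table: two tables and two hash functions; every
    stored key occupies exactly one of its two candidate slots."""

    def __init__(self, size, max_kicks=100, seed=0):
        rng = random.Random(seed)
        self.size = size
        self.max_kicks = max_kicks
        self.tables = [[None] * size, [None] * size]
        # Two independently seeded hash functions (illustrative stand-ins
        # for the pairwise-independent functions a real analysis would use).
        self.seeds = (rng.getrandbits(64), rng.getrandbits(64))

    def _slot(self, key, i):
        return hash((self.seeds[i], key)) % self.size

    def insert(self, key):
        for _ in range(self.max_kicks):
            for i in (0, 1):
                j = self._slot(key, i)
                if self.tables[i][j] is None:
                    self.tables[i][j] = key
                    return True
            # Both candidate slots are full: evict the occupant of the
            # table-0 slot and try to re-place the evicted key.
            j = self._slot(key, 0)
            key, self.tables[0][j] = self.tables[0][j], key
        return False  # a full implementation would rehash here

    def lookup(self, key):
        # A key can live in only one of its two candidate slots.
        return any(self.tables[i][self._slot(key, i)] == key for i in (0, 1))
```

Lookups touch at most two slots, which mirrors why the paper's domain-extension method needs only two calls to the underlying PRF.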

Iftach Haitner,
Eran Omri
and Hila Zarosim
Limits on the Usefulness of
Random Oracles [Abstract]
TCC 2013
[PDF] Draft of full version
[PDF]
In the random oracle model, parties are given oracle access
to a random function (i.e., a uniformly chosen function from the set of all
functions), and are assumed to have unbounded computational power (though they
can only make a bounded number of oracle queries). This model provides powerful
properties that allow proving the security of many protocols, even ones that
cannot be proved secure in the standard model (under any hardness assumption).
The random oracle model is also used for showing that a given cryptographic
primitive cannot be used in a black-box way to construct another primitive; in
their seminal work, Impagliazzo and Rudich [STOC '89]
showed that no key-agreement protocol exists in the random oracle model,
yielding that key agreement cannot be black-box reduced to one-way functions.
Their work has a long line of follow-up works (Simon
[EC '98], Gertner et al. [STOC '00] and
Gennaro et al. [SICOMP '05], to name a few), showing that oracle
access to a certain type of function family (e.g., the family that "implements"
public-key encryption) is not sufficient for building a given cryptographic
primitive (e.g., oblivious transfer). Yet, the following question remained open:
What is the exact power of the random oracle model?
We make progress towards answering the above question, showing that,
essentially, any no-private-input, semi-honest, two-party functionality that can
be securely implemented in the random oracle model can be securely implemented
information theoretically (where parties are assumed to be all-powerful, and no
oracle is given). We further generalize the above result to function families
that provide some natural combinatorial property.
Our result immediately yields that the only no-input functionalities that
can be securely realized in the random oracle model (in the sense of secure
function evaluation) are the trivial ones (ones that can be securely realized
information theoretically). In addition, we use the recent information-theoretic
impossibility result of McGregor et al. [FOCS '10]
to show the existence of functionalities (e.g., inner product) that cannot be
computed both accurately and in a differentially private manner in the random
oracle model; yielding that protocols for computing these functionalities cannot
be black-box reduced to the existence of one-way functions.

Itay Berman
and Iftach Haitner
From Non-Adaptive to Adaptive Pseudorandom Functions [Abstract]
Journal of Cryptology 2013
[link]
TCC 2012
[PDF]
Draft of full version
[PDF]
Unlike the standard notion of pseudorandom functions (PRF), a
non-adaptive PRF is only required to be indistinguishable from a random
function in the eyes of a non-adaptive distinguisher (i.e., one
that prepares its oracle calls in advance). A recent line of research has
studied the possibility of a direct construction of adaptive
PRFs from non-adaptive ones, where direct means that the constructed adaptive
PRF uses only a few (ideally, a constant number of) calls to the underlying
non-adaptive PRF. Unfortunately, this study has only yielded negative results,
showing that ``natural'' such constructions are unlikely to exist (e.g.,
Myers [EUROCRYPT '04], Pietrzak [CRYPTO '05, EUROCRYPT '06]).
We give an affirmative answer to the above question, presenting a direct
construction of adaptive PRFs from non-adaptive ones. The suggested construction is
extremely simple: a composition of the non-adaptive PRF with an appropriate
pairwise-independent hash function.
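The composition described above, F'(x) = F(h(x)) for a pairwise-independent h, can be sketched concretely. HMAC-SHA256 stands in for the underlying (non-adaptive) PRF and h(x) = ax + b mod p is a standard pairwise-independent family over a prime field; both instantiations are illustrative, not from the paper:

```python
import hashlib
import hmac
import secrets

P = 2 ** 61 - 1  # a Mersenne prime; h(x) = a*x + b mod P is pairwise independent

class AdaptivePRF:
    """Sketch of the composition: hash the input with a pairwise-independent
    function h, then apply the underlying PRF F (HMAC-SHA256 here, as an
    illustrative stand-in)."""

    def __init__(self):
        self.key = secrets.token_bytes(32)      # key of the underlying PRF
        self.a = secrets.randbelow(P - 1) + 1   # a != 0, so h is injective-ish
        self.b = secrets.randbelow(P)

    def _h(self, x: int) -> int:
        # Pairwise-independent hash over the field of size P.
        return (self.a * x + self.b) % P

    def evaluate(self, x: int) -> bytes:
        # F'(x) = F(h(x)): one hash evaluation plus one PRF call.
        hx = self._h(x).to_bytes(8, "big")
        return hmac.new(self.key, hx, hashlib.sha256).digest()
```

Note the adaptive PRF makes exactly one call to the underlying PRF per query, matching the "direct construction" requirement in the abstract.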

Iftach Haitner
and Eran Omri
Coin Flipping with Constant Bias Implies One-Way
Functions [Abstract]
[Slides]
SIAM Journal on Computing 2013
[link]
FOCS 2011
[PDF] Draft of full version
[PDF]
It is well known (cf. Impagliazzo and Luby [FOCS '89]) that the existence of
almost all ``interesting'' cryptographic applications, i.e., ones that cannot
hold information theoretically, implies one-way functions. An important exception for which the above implication is not
known, however, is the case of coin-flipping protocols. Such protocols allow
honest parties to mutually flip an unbiased coin, while guaranteeing that even a cheating (efficient) party cannot bias the
output of the protocol by much. Impagliazzo and Luby proved that
coin-flipping protocols that are safe against negligible bias do imply one-way functions, and, very recently, Maji,
Prabhakaran, and Sahai [FOCS '10] proved the same for constant-round protocols
(with any non-trivial bias). For the general case, however, no such implication was known.
We make progress towards answering the above fundamental question, showing that
(strong) coin-flipping protocols safe against a constant bias (concretely,
(√2 − 1)/2 − o(1)) imply one-way functions.

Yevgeniy Dodis,
Iftach Haitner
and
Aris Tentes
On the Instantiability of
Hash-and-Sign RSA Signatures [Abstract]
[Slides]
TCC 2012 [PDF]
Draft of full version [PDF]
The hash-and-sign RSA signature is one of the most elegant and well-known
signature schemes, extensively used in a wide variety of cryptographic
applications. Unfortunately, the only existing analysis of this popular
signature scheme is in the random oracle model, where the resulting idealized
signature is known as the RSA Full Domain Hash signature scheme
(RSA-FDH). In fact, prior work has shown several ``uninstantiability'' results
for various abstractions of RSA-FDH, where the RSA function was replaced by a
family of trapdoor random permutations, or the hash function instantiating the
random oracle could not be keyed. These abstractions, however, do not allow the
reduction and the hash function instantiation to use the algebraic properties of
the RSA function, such as the multiplicative group structure of Z_{n}^{*}. In contrast,
the multiplicative property of the RSA function is critically used in many
standard-model analyses of various RSA-based schemes.
Motivated by closing this gap, we consider the setting where the RSA
function representation is generic (i.e., black-box) but
multiplicative, whereas the hash function itself is in the standard
model, and can be keyed and exploit the multiplicative properties of the RSA
function.
This setting abstracts all known techniques for designing provably secure RSA-based signatures in the standard model, and aims to address the main
limitations of prior uninstantiability results.
Unfortunately, we show that it is still impossible to reduce the security of
RSA-FDH to any natural assumption even in our model. Thus, our result suggests
that in order to prove the security of a given instantiation of RSA-FDH, one should use
a non-black-box security proof, or use specific properties of the RSA group
that are not captured by its multiplicative structure alone.
We complement our negative result with a positive one, showing that
the RSA-FDH signatures can be proven secure under the standard RSA assumption, provided that the number
of signing queries is a priori bounded.

It is well known that secure computation without an honest majority requires computational assumptions.
An interesting question that therefore arises relates to the way such computational assumptions are used. Specifically, can the secure protocol use the underlying primitive (e.g., a one-way trapdoor permutation) in a black-box way, treating it as an oracle, or must it be non-black-box (by referring to the code that computes the primitive)?
Despite the fact that many general constructions of cryptographic schemes refer to the underlying primitive in a black-box way only, there are some constructions that are inherently non-black-box. Indeed, all known constructions of protocols for general secure computation that are secure in the presence of a malicious adversary and without an honest majority use the underlying primitive in a non-black-box way (requiring one to prove in zero-knowledge statements that relate to the primitive).
In this paper, we study whether such non-black-box use is essential. We answer this question in the negative. Concretely, we present a fully-black-box reduction from oblivious transfer with security against malicious parties to oblivious transfer with security against semi-honest parties. As a corollary, we get the first constructions of general multi-party protocols (with security against malicious adversaries and without an honest majority) which only make a black-box use of semi-honest oblivious transfer,
or alternatively a black-box use of lower-level primitives such as enhanced trapdoor permutations or homomorphic encryption.

Iftach Haitner,
Omer Reingold and
Salil Vadhan
Efficiency Improvements in Constructing Pseudorandom
Generators from One-Way Functions [Abstract]
SIAM Journal on Computing 2013
[link]
Annual ACM Symposium on Theory of Computing (STOC), 2010
[PDF] Draft of full version
[PDF]
We give a new construction of pseudorandom generators from any one-way
function. The construction achieves better parameters and is simpler than that
given in the seminal work of Håstad, Impagliazzo, Levin and Luby [SICOMP '99].
The key to our construction is a new notion of next-block pseudoentropy,
which is inspired by the notion of ``inaccessible entropy'' recently introduced
in [Haitner, Reingold, Vadhan and Wee STOC '09]. An additional advantage over
all previous constructions is that our pseudorandom generators are highly
parallelizable and invoke the one-way function in a non-adaptive manner. Using
[Applebaum, Ishai and Kushilevitz SICOMP '06], this implies the existence of
pseudorandom generators in NC^{0} based on the existence of one-way functions in
NC^{1}.

Boaz Barak,
Iftach Haitner,
Dennis Hofheinz and
Yuval Ishai
Bounded KeyDependent Message Security
[Abstract]
Advances in Cryptology - Eurocrypt,
2010
[PDF]
Draft of full version
[PDF]
We construct the first public-key encryption scheme that is proven secure (in
the standard model,
under standard assumptions) even when the attacker gets access to encryptions of
arbitrary
efficient functions of the secret key. Specifically, under either the DDH or LWE
assumption, for
all polynomials L and N
we obtain a public-key encryption scheme that resists key-dependent
message (KDM) attacks for up to N(k) public keys and
functions of circuit size up to
L(k), where k denotes
the size of the secret key. We call such a scheme bounded KDM
secure. Moreover, we show that our scheme suffices for one of the
important applications of KDM
security: the ability to securely instantiate symbolic protocols with axiomatic
proofs of security.
We also observe that any fully homomorphic encryption scheme that additionally
enjoys circular
security and circuit privacy is fully KDM secure, in the sense
that the encryption and
decryption algorithms can be independent of the polynomials
L and N as above. Thus, the recent
fully homomorphic encryption scheme of Gentry (STOC 2009) is fully KDM secure
under certain
non-standard hardness assumptions.
Previous works obtained either full KDM security in the random oracle model
(Black et al., SAC 2002) or security with respect to a very restricted class of
functions (e.g., clique/circular security and affine functions; Boneh et al.,
CRYPTO 2008,
and Applebaum et al., CRYPTO 2009).
Our main result is based on a combination of the circular-secure encryption
scheme of either
Boneh et al. or Applebaum et al. with Yao's garbled circuit
construction.
Finally, we extend the impossibility result of Haitner and Holenstein (TCC
2009), showing that it
is impossible to prove KDM security against a family of query functions that
contains exponentially
hard pseudorandom functions, using only black-box access to the
query function and the
adversary attacking the scheme. This proves that the non-black-box usage of the
query function in
our proof of security is inherent.

Iftach Haitner,
Thomas Holenstein,
Omer Reingold,
Salil Vadhan and
Hoeteck Wee
Universal One-Way Hash Functions via
Inaccessible Entropy
[Abstract]
Advances in Cryptology - Eurocrypt,
2010 [PDF]
Draft of full version
[PDF]
This paper revisits the construction of Universal One-Way Hash Functions
(UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler
construction of UOWHFs, which also obtains better efficiency and security. The
construction exploits a strong connection to the recently introduced notion of
inaccessible entropy (Haitner et al. STOC 2009). With this
perspective, we observe that a small tweak of any one-way function
f is already a weak form of a UOWHF: consider
F(x,i) that outputs the i-bit-long
prefix of f(x). If F
were a UOWHF then, given a random x and
i, it would be hard to come up with
x' ≠ x such that F(x,i)=F(x',i).
While this may not be the case, we show (rather easily) that it is hard to
sample x' with almost full entropy among all the
possible such values of x'. The rest of our
construction simply amplifies and exploits this basic property.
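The tweaked function F(x,i) from the paragraph above is simple enough to write out directly. SHA-256 stands in for the one-way function f purely for illustration (the paper of course works with an arbitrary one-way function, and SHA-256 carries no such proof):

```python
import hashlib

def f(x: bytes) -> str:
    """Stand-in 'one-way function': SHA-256, rendered as a 256-bit string.
    An illustrative choice only; any one-way function works in the paper."""
    digest = hashlib.sha256(x).digest()
    return "".join(format(byte, "08b") for byte in digest)

def F(x: bytes, i: int) -> str:
    """The tweak from the abstract: output the i-bit prefix of f(x)."""
    return f(x)[:i]
```

For a random i, finding x' ≠ x with F(x,i) = F(x',i) means colliding on just an i-bit prefix; the paper shows such collisions are still entropy-deficient, and amplifies that.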
With this and other recent works, we have that the constructions of three
fundamental cryptographic primitives (Pseudorandom Generators, Statistically
Hiding Commitments and UOWHFs) out of one-way functions are to a large extent
unified. In particular, all three constructions rely on and manipulate
computational notions of entropy in similar ways. Pseudorandom Generators rely
on the well-established notion of pseudoentropy, whereas Statistically Hiding
Commitments and UOWHFs rely on the newer notion of inaccessible entropy.

Iftach Haitner,
Mohammad Mahmoody and
David Xiao
A New Sampling Protocol and Applications to
Basing Cryptographic Primitives on the Hardness of NP
[Abstract]
IEEE Conference on Computational Complexity
(CCC), 2010
[PDF]
Draft of full version
[PDF]
We investigate the question of what languages can be decided efficiently with
the help of a
recursive collision-finding oracle. Such an oracle can be used to break
collision-resistant hash
functions or, more generally, statistically hiding commitments. The oracle we
consider, Sam_{d},
where d is the recursion depth, is based on the identically-named oracle defined
in the work of
Haitner et al. (FOCS '07). Our main result is a constant-round public-coin
protocol AM-Sam
that allows an efficient verifier to emulate a Sam_{d} oracle for any
constant depth d = O(1) with the
help of a BPP^{NP} prover. AM-Sam allows us to conclude that if
L is decidable by a
k-adaptive
randomized oracle algorithm with access to a Sam_{O(1)} oracle, then
L ∈ AM[k] ∩ coAM[k].
The above yields the following corollary: assuming there exists an
O(1)-adaptive reduction that
bases constant-round statistically hiding commitment on NP-hardness, then
NP ⊆ coAM and
the polynomial hierarchy collapses. The same result holds for any primitive that
can be broken
by Sam_{O(1)}, including collision-resistant hash functions and
O(1)-round oblivious
transfer where
security holds statistically for one of the parties. We also obtain non-trivial
(though weaker)
consequences for k-adaptive reductions for any
k = poly(n). Prior to our work,
most results in
this research direction either applied only to non-adaptive reductions (Bogdanov
and Trevisan,
SIAM J. of Comp. '06) or to primitives with special
regularity properties (Brassard FOCS '79,
Akavia et al., FOCS '06).
The main technical tool we use to prove the above is a new constant-round
public-coin
protocol (SampleWithSize) that we believe may be interesting in its own
right, and that guarantees the following. Given an efficient function
f on n
bits, let D be the output distribution
D = f(U_{n}); then SampleWithSize allows an efficient verifier Arthur to
use an all-powerful prover
Merlin's help to sample a random y ← D along with a good multiplicative
approximation of the
probability p_{y} = Pr_{y' ← D}[y' = y]. The crucial feature of
SampleWithSize
is that it extends even
to distributions of the form D = f(U_{S}), where
U_{S} is the uniform distribution on
an efficiently
decidable subset S ⊆ {0,1}^{n} (such
D are called efficiently samplable with
post-selection), as
long as the verifier is also given a good approximation of the size of
S.
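The quantity the verifier needs, p_y = Pr_{y' ← D}[y' = y] for D = f(U_n), can be computed exactly by brute force when n is tiny, which illustrates the guarantee (though of course not the protocol itself, whose point is to approximate p_y efficiently with the prover's help). The function f below is an illustrative toy choice:

```python
from collections import Counter

def output_distribution(f, n):
    """Exact output distribution D = f(U_n): the probability p_y of each
    output y under a uniformly random n-bit input."""
    counts = Counter(f(x) for x in range(2 ** n))
    return {y: c / 2 ** n for y, c in counts.items()}

# Toy function on n = 4 bits (illustrative): f(x) = x mod 3.
D = output_distribution(lambda x: x % 3, 4)
# D[y] is exactly the probability p_y that SampleWithSize approximates.
```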

Iftach Haitner
A Parallel Repetition Theorem for Any
Interactive Argument [Abstract]
[Slides]
SIAM Journal on Computing 2013
[link]
IEEE Symposium on Foundations of Computer Science
(FOCS), 2009 Draft of full version
[PDF]
The question of whether or not parallel repetition reduces the soundness error
is a fundamental one in the theory of protocols. While parallel repetition
reduces the error (at an exponential rate) in interactive proofs and (at a weak
exponential rate) in special cases of interactive arguments (e.g., 3-message
protocols - Bellare, Impagliazzo and Naor [FOCS '97], and
public-coin protocols - Håstad, Pass, Pietrzak and Wikström [Manuscript '08]), Bellare et al.
gave examples of interactive arguments for which parallel repetition does not
reduce the soundness error at all.
We show that by slightly modifying any interactive argument, in a way
that preserves its completeness and only slightly deteriorates its soundness, we
get a protocol for which parallel repetition does reduce the error at a weak
exponential rate. In this modified version, the verifier flips at the beginning
of each round a (1 − 1/4m, 1/4m)-biased coin
(i.e., 1 is tossed with probability
1/4m), where m is the round complexity of the
(original) protocol. If the coin is one, the verifier halts the interaction and
accepts; otherwise it sends the same message that the original verifier would.
At the end of the protocol (if reached), the verifier accepts if and only if the
original verifier would.
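The verifier modification described above can be sketched as a generic wrapper. The names `original_rounds`, `original_decision`, and `rng` are hypothetical stand-ins for the original verifier's per-round message functions, its final accept predicate, and a uniform [0,1) sampler; this is a control-flow sketch, not a full protocol implementation:

```python
def run_modified_verifier(original_rounds, original_decision, m, rng):
    """Before each of the m rounds, toss a (1 - 1/4m, 1/4m)-biased coin;
    on 1, halt and accept immediately, otherwise behave exactly like the
    original verifier."""
    for send_message in original_rounds:
        if rng() < 1 / (4 * m):  # the biased coin came up 1
            return True          # halt the interaction and accept
        send_message()           # send the original verifier's message
    return original_decision()   # end of protocol: defer to the original verifier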

Iftach Haitner,
Minh-Huyen Nguyen, Shien Jin Ong,
Omer Reingold and
Salil Vadhan
Statistically Hiding Commitments and Statistical
Zero-Knowledge Arguments from Any One-Way Function
[Abstract]
SIAM Journal on Computing 2009
[link]
Draft of full version
[PDF]

Iftach Haitner,
Omer Reingold,
Salil Vadhan and
Hoeteck Wee
Inaccessible Entropy [Abstract]
[Slides]
[Video
(appx. 1 hour)]
Annual ACM
Symposium on Theory of Computing (STOC), 2009
[PDF]
Draft of full version
[PDF]
We put forth a new computational notion of entropy, which measures the (in)feasibility of sampling high-entropy strings that are consistent with a given protocol. Specifically, we say that the i’th
round of a protocol (A,B) has accessible entropy at most
k if no polynomial-time strategy
A* can generate messages for
A
such that its message in the i’th round has entropy greater than
k, when conditioned both on prior messages of the protocol and on prior coin tosses of
A*.
As applications of this notion, we
- Give a much simpler and more efficient construction of
statistically hiding
commitment schemes from arbitrary one-way functions.
- Prove that constant-round statistically hiding commitments are necessary
for constructing constant-round zero-knowledge proof systems for NP that remain secure under parallel composition (assuming the existence of one-way functions).

Iftach Haitner,
Alon Rosen and
Ronen Shaltiel
On the (Im)Possibility of Arthur-Merlin Witness
Hiding Protocols
[Abstract]
[Slides] Theory of Cryptography Conference (TCC), 2009
[PDF]
The concept of witness-hiding suggested by Feige and
Shamir is a natural relaxation of zero-knowledge. In this paper we identify
languages and distributions for which many known constant-round public-coin
protocols with negligible soundness cannot be shown witness-hiding using
black-box techniques.
In particular, our results imply that it is impossible to prove that parallel
repetition of 3-Colorability or Hamiltonicity is witness-hiding for
distributions that support instances with exactly one witness, if the proof of
security is by a black-box reduction that is independent of the
choice of the commitment scheme used in the protocol. This lower bound
conceptually matches an upper bound of Feige and Shamir that uses such a
black-box reduction to show that parallel repetition of 3-Colorability or
Hamiltonicity is
witness-hiding for distributions with ``two independent witnesses''.
We also consider black-box reductions for 3-Colorability or Hamiltonicity that
depend on a specific implementation of the commitment scheme.
While we cannot rule out such reductions completely, we show that ``natural
reductions'' cannot bypass the limitations above.
Our proofs use techniques developed by Goldreich and Krawczyk for the case of
zero-knowledge. The setting of witness-hiding, however, presents new technical and
conceptual difficulties that do not come up in the setting of zero-knowledge. The
high-level idea is that if a black-box reduction establishes the witness-hiding
property for a protocol that is also a proof of knowledge, then this latter
property can be used ``against the reduction'' to find witnesses
unconditionally.

Iftach Haitner and
Thomas Holenstein
On the (Im)Possibility of Key Dependent
Encryption [Abstract]
[Slides]
Theory of
Cryptography Conference (TCC), 2009
[PDF]
Draft of full version
[PDF]
We study the possibility of constructing encryption schemes secure under
messages that are chosen depending on the key k of
the encryption scheme itself. We give the following separation results that hold both in the private-key and in the public-key settings:
- Let H be the family of poly(n)-wise
independent
hash functions. There exists no fully-black-box reduction
from an encryption scheme secure against key-dependent messages to
one-way permutations (and also to families of trapdoor permutations) if
the adversary can obtain encryptions of h(k)
for h ∈ H.
- There exists no reduction from an encryption scheme secure
against key-dependent messages to, essentially,
any cryptographic assumption, if the adversary can obtain an encryption of
g(k) for an arbitrary
g, as long as the reduction's proof of
security treats both the adversary and the function g
as black boxes.

Iftach Haitner
New Implications and Improved Efficiency of
Constructions Based on One-Way Functions
Ph.D. Thesis, 2008
[PDF]

Iftach Haitner
Semi-Honest to Malicious Oblivious Transfer -
The
Black-Box Way [Abstract]
[Slides]
Theory of Cryptography Conference
(TCC), 2008 [PDF]
Full version is part of
[Black-Box
Constructions of Protocols for Secure Computation]
Until recently, all known
constructions of oblivious transfer protocols based on
general hardness assumptions had the following form. First,
the hardness assumption is used in a black-box manner (i.e.,
the construction uses only the input/output behavior of the
primitive guaranteed by the assumption) to construct a
semi-honest oblivious transfer, a protocol whose
security is guaranteed to hold only against adversaries that
follow the prescribed protocol. Then, the latter protocol is
``compiled'' into a (malicious) oblivious transfer using
non-black-box techniques (a Karp reduction is carried out in order
to prove an NP statement in zero-knowledge). In their recent
breakthrough result, Ishai, Kushilevitz, Lindell and Petrank
(STOC '06) deviated from the above paradigm, presenting a
black-box reduction from oblivious transfer to enhanced
trapdoor permutations and to homomorphic encryption. Here we
generalize their result, presenting a black-box reduction
from oblivious transfer to semi-honest oblivious transfer.
Consequently, oblivious transfer can be black-box reduced to
each of the hardness assumptions known to imply a
semi-honest oblivious transfer in a black-box manner. This
list currently includes, besides the hardness assumptions used
by Ishai et al., also the existence of families of dense
trapdoor permutations and of non-trivial single-server
private information retrieval.

We study the
communication complexity of single-server Private
Information Retrieval (PIR) protocols that are based on
fundamental cryptographic primitives in a black-box manner.
In this setting, we establish a tight lower bound on the
number of bits communicated by the server in any
polynomially-preserving construction that relies on trapdoor
permutations. More specifically, our main result states that
in such constructions Ω(n) bits must be communicated
by the server, where n is the size of the server's
database, and this improves the Ω(n / log n) lower
bound due to Haitner, Hoch, Reingold and Segev (FOCS '07).
Therefore, in the setting under consideration, the naive
solution in which the user downloads the entire database
turns out to be optimal up to constant multiplicative
factors. We note that the lower bound we establish holds for
the most generic form of trapdoor permutations, including in
particular enhanced trapdoor permutations.
Technically speaking, this paper consists of two main
contributions from which our lower bound is obtained. First,
we derive a tight lower bound on the number of bits
communicated by the sender during the commit stage of any
black-box construction of a statistically-hiding
bit-commitment scheme from a family of trapdoor
permutations. This lower bound asymptotically matches the
upper bound provided by the scheme of Naor, Ostrovsky,
Venkatesan and Yung (CRYPTO '92). Second, we improve the
efficiency of the reduction of statistically-hiding
commitment schemes to low-communication single-server PIR,
due to Beimel, Ishai, Kushilevitz and Malkin (STOC '99). In
particular, we present a reduction that essentially
preserves the communication complexity of the underlying
single-server PIR protocol.


We give a construction of
statistically-hiding commitment schemes (ones where the
hiding property holds information theoretically), based on
the minimal cryptographic assumption that one-way functions
exist. Our construction employs two-phase commitment
schemes, recently constructed by Nguyen, Ong and Vadhan
(FOCS '06), and universal one-way hash functions, introduced
and constructed by Naor and Yung (STOC '89) and Rompel (STOC
'90).

Iftach Haitner and
Omer Reingold
A New Interactive Hashing Theorem
[Abstract]
[Slides]
Journal of Cryptology
2014
[link]
IEEE Conference on Computational Complexity (CCC), 2007
[PDF]
Draft of full version
[PDF]
Interactive hashing, introduced by Naor, Ostrovsky, Venkatesan and Yung (CRYPTO '92),
plays an important role in many cryptographic protocols. In particular,
interactive hashing is a major component in all known constructions
(which are based on general one-way permutations and one-way functions)
of statistically hiding commitment schemes and of statistical
zero-knowledge arguments. Interactive hashing with respect to a one-way
permutation f is a
two-party protocol that enables a sender that knows y=f(x) to transfer a random hash
z=h(y) to a receiver. The
receiver is guaranteed that the sender is committed to y (in the sense that it cannot come up with
x and
x' such that f(x) ≠ f(x'), but
h(f(x))=h(f(x'))=z). The sender is guaranteed
that the receiver does not learn any additional information on
y. In particular, when
h
is a two-to-one hash function, the
receiver does not learn which of the two preimages
{y,y'} = h^{−1}(z)
is the one the sender can invert with respect to
f.
This paper reexamines the notion of interactive hashing.
We give an alternative proof for the Naor et al. protocol,
which seems significantly simpler and more intuitive than
the original one. Moreover, the new proof achieves much
better parameters (in terms of how security preserving the
reduction is). Finally, by applying our new proof to a close
variant of the Naor et al. protocol, we achieve a more
versatile interactive hashing theorem for a more general
setting than that of the Naor et al. protocol.

Iftach Haitner,
Danny Harnik and
Omer Reingold
On the Power of the Randomized Iterate [Abstract]
[Slides]
SIAM
Journal on Computing, 2011
[link]
Advances in Cryptology - CRYPTO, 2006
[PDF]
Draft of full version
[PDF]
We consider two of the
most fundamental theorems in Cryptography. The first, due to
Håstad, Impagliazzo, Levin and Luby (STOC '89, STOC '90,
SIAM J. on Computing '99), is that pseudorandom generators
can be constructed from any one-way function. The second,
due to Yao (FOCS '82), states that the existence of weak
one-way functions implies the existence of full-fledged
one-way functions. These powerful plausibility results shape
our understanding of hardness and randomness in
Cryptography, but unfortunately their proofs are not as
tight (i.e., security preserving) as one may desire.
This work revisits a technique that we call the
randomized iterate, introduced by Goldreich, Krawczyk
and Luby (SIAM J. on Computing '93). This technique was used
by Goldreich et al. to give a construction of pseudorandom
generators from regular one-way functions. We simplify and
strengthen this technique in order to obtain a similar
construction where the seed length of the resulting
generators is as short as Θ(n log n), rather than the
Θ(n^{3}) achieved by Goldreich et al. Our technique has
the potential of implying seed length Θ(n), and the
only bottleneck for such a result is the parameters of
current generators against space-bounded computations. We
give a construction with similar parameters for security
amplification of regular one-way functions. This improves
upon the construction of Goldreich, Impagliazzo, Levin,
Venkatesan and Zuckerman (FOCS '90) in that the construction
does not need to "know" the regularity parameter of the
functions (in terms of security, the two reductions are
incomparable). In addition, we use the randomized iterate to
show a construction of a pseudorandom generator based on an
exponentially hard one-way function that has seed length of
only Θ(n^{2}). This improves a recent result of
Holenstein (TCC '06) that shows a construction with seed
length Θ(n^{5}) based on such one-way functions.
Finally, we show that the randomized iterate may even be
useful in the general context of Håstad et al. In
particular, we use the randomized iterate to replace the
basic building block of the Håstad et al. construction.
Interestingly, this modification improves efficiency by an
n^{3} factor and reduces the seed length to
Θ(n^{7})
(which also implies an improvement in the security of the
construction).
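The core object is easy to state. As a toy sketch (my own illustration, not the paper's actual construction: the f below is a 2-regular function that is of course not one-way at this size, and h is the standard a·x+b pairwise-independent family modulo a prime), the k-th randomized iterate alternates applications of f with fresh random hashes:

```python
import secrets

P = 8191  # a small prime; real constructions work over {0,1}^n with a one-way f

def f(x):
    # Toy stand-in for a regular function: squaring mod an odd prime is
    # exactly 2-regular on the nonzero domain (x and P-x collide).
    # It is NOT one-way at this size -- illustration only.
    return (x * x) % P

def pairwise_hash():
    # h(x) = a*x + b mod P is a pairwise-independent family for random a, b.
    a = 1 + secrets.randbelow(P - 1)
    b = secrets.randbelow(P)
    return lambda x: (a * x + b) % P

def randomized_iterate(x, hs):
    """The k-th randomized iterate: apply f once, then alternate a fresh
    pairwise-independent hash with another application of f."""
    y = f(x)
    for h in hs:
        y = f(h(y))
    return y
```

The hashes hs are chosen once, publicly and independently of x; hardness is then argued about inverting the last application of f given the hashes.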

Iftach Haitner,
Danny Harnik and
Omer Reingold
On Efficient Pseudorandom Generators from
Exponentially Hard One-Way Functions [Abstract]
[Slides]
International
Colloquium on
Automata,
Languages and
Programming - ICALP, 2006
[PDF]
Full version is part of
[On
the Power of the Randomized Iterate]
We show a construction of
a pseudorandom generator from any exponentially hard one-way
function with a blowup of only Θ(n^{2}). This improves the
recent Θ(n^{5}) construction of Holenstein (TCC '06). Our
technique uses the tools recently presented in Haitner et
al. (Crypto '06) for the setting of regular one-way
functions, and further develops them.

Iftach Haitner, Omer Horvitz,
Jonathan Katz,
ChiuYuen Koo,
Ruggero Morselli and
Ronen Shaltiel
Reducing Complexity Assumptions for
Statistically-Hiding Commitment [Abstract]
[Slides]
Journal of Cryptology 2009
[link] Advances in Cryptology - Eurocrypt,
2005
[PDF]
Draft of full version
[PDF]
We revisit the following
question: what are the minimal assumptions needed to construct
statistically-hiding commitment schemes? Naor et al. show how
to construct such schemes based on any one-way permutation. We
improve upon this by showing a construction based on any approximable preimage-size one-way function. These are
one-way functions for which it is possible to efficiently
approximate the number of preimages of a given output. A special
case is the class of regular one-way functions, where all
points in the image of the function have the same (known) number
of preimages.
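As a toy illustration of regularity (my own example, with no one-wayness at this scale): squaring modulo an odd prime is exactly 2-regular on the nonzero domain, since x and P−x, and only they, share a square.

```python
from collections import Counter

P = 101  # a small odd prime; illustration only, not a one-way function

def f(x):
    return (x * x) % P  # squaring mod P

# Every nonzero square mod an odd prime has exactly two square roots,
# so f restricted to {1, ..., P-1} is 2-regular: every point in the
# image has the same (known) number of preimages.
preimage_counts = Counter(f(x) for x in range(1, P))
```

Here the regularity parameter (2) is known in advance; an approximable preimage-size function only requires that this count be efficiently approximable from the output.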
We also prove two additional results related to
statistically-hiding commitment. First, we prove a (folklore)
result showing, roughly speaking, that
the statistical hiding property of any such commitment scheme is
amplified exponentially when multiple independent parallel
executions of the scheme are carried out. Second, we show a compiler which transforms any commitment scheme that is
statistically hiding against an honest-but-curious receiver into
one that is statistically hiding even against a malicious
receiver.

Iftach Haitner Implementing Oblivious Transfer Using a Collection of Dense Trapdoor
Permutations [Abstract]
[Slides]
Theory of Cryptography Conference (TCC), 2004
[PDF]
Draft of full version
[PDF]
