Colloquium talks list
November 9 11:15-12:15
Room: Schreiber 309
Yehuda Koren, Yahoo! Research
The Netflix Prize: Quest for $1,000,000
The collaborative filtering approach to recommender systems predicts
user preferences for products or services by learning past user-item
relationships. Its significant economic implications have made
collaborative filtering techniques an important component at well-known
e-tailers such as Amazon and Netflix. The field has enjoyed a surge of
interest since October 2006, when the Netflix Prize competition
commenced. Netflix released a dataset containing 100 million anonymous
movie ratings and challenged the research community to develop
algorithms that could beat the accuracy of its recommendation system,
Cinematch. In this talk I will survey the competition, together with
some of the principles and algorithms that led us to win the
Progress Prizes in the competition.
Bio:
Yehuda Koren completed his PhD in CS at the Weizmann Institute in 2003.
He was with AT&T Research during 2003-2008, and recently joined Yahoo!
Research (Haifa).
November 13 17:00-18:00
Room: Schreiber 006
Leslie Valiant, Harvard University
When Biology is Computation
We argue that computational models have an essential role in uncovering the
principles behind a variety of biological phenomena that cannot be
approached by other means. In this talk we shall focus on evolution.
Living organisms function according to complex mechanisms that operate in
different ways depending on conditions. Darwin's theory of evolution
suggests that such mechanisms evolved through random variation guided by
natural selection. However, there has existed no theory that would explain
quantitatively which mechanisms can so evolve in realistic population sizes
within realistic time periods, and which are too complex.
We start with the observation that Darwin's theory becomes a computational
theory once one is specific about how exactly the "variation" and the
"selection" are done. The dilemma in being specific is that if one is too
restrictive about the basic mechanisms allowed then it becomes implausible
that all the complexity of biology can be expressed in terms of them, while
if one is too expressive then it is implausible that random variation with
selection can successfully navigate this complex space as changing
conditions require.
In order to analyze this problem we treat Darwinian evolution as a form of
computational learning from examples in which the course of learning is
influenced only by the aggregate fitness of the current hypothesis on the
examples, and not otherwise by specific examples. We formulate a notion of
evolvability that distinguishes function classes that are evolvable with
polynomially bounded resources from those that are not. For example, we can
show that monotone Boolean conjunctions and disjunctions are demonstrably
evolvable over the uniform distribution, while Boolean parity functions are
demonstrably not. We suggest that the overall mechanism that underlies
biological evolution is "evolvable target pursuit", which consists of a
series of evolutionary stages, each one somewhat predictably pursuing an
evolvable target in the technical sense suggested above, each target being
rendered evolvable by the serendipitous combination of the environment and
the outcomes of previous evolutionary stages.
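The parity example admits a short concrete illustration (my own sketch of the underlying orthogonality, not of the evolvability model itself): under the uniform distribution, distinct parity functions are pairwise uncorrelated, so the aggregate fitness of a parity hypothesis reveals nothing about the target until the hypothesis is exactly right. The code verifies this exhaustively for small n.

```python
from itertools import combinations

def parity(S, x):
    """Parity of the bits of x indexed by S, as a +/-1 value."""
    return -1 if sum((x >> i) & 1 for i in S) % 2 else 1

def correlation(S, T, n):
    """Average of chi_S(x) * chi_T(x) over the uniform distribution on {0,1}^n."""
    return sum(parity(S, x) * parity(T, x) for x in range(2 ** n)) / 2 ** n

n = 4
subsets = [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]
for S in subsets:
    for T in subsets:
        # Distinct parities are exactly orthogonal: fitness gives no gradient.
        assert correlation(S, T, n) == (1.0 if S == T else 0.0)
```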
November 23 11:15-12:15
Room: Schreiber 309
Yuri Gurevich, Microsoft Research
Security: Access Control for Distributed Systems
In an increasingly interconnected world, access control quickly
grows in complexity and importance. How to make distributed
access secure, uniform, practical and amenable to analysis?
We present the Distributed Knowledge Authorization Language (DKAL),
which achieves substantially greater expressivity than previous
languages within the same feasibility constraints. It solves the
information leak problem of the previous languages. It has a more
robust logical foundation. The talk does not presuppose access
control expertise. We present the language and an application,
and we discuss various aspects of distributed access control.
Bio:
Yuri Gurevich is Principal Researcher at Microsoft. He is also
Professor Emeritus at the University of Michigan, an ACM Fellow, a Guggenheim
Fellow, a member of Academia Europaea, and Dr. Honoris Causa of
a couple of universities.
November 30 11:15-12:15
Room: Schreiber 309
Dan Tsafrir, IBM T.J. Watson Research Center
Portably preventing file race attacks with user-mode path resolution
The filesystem API of contemporary systems exposes programs to TOCTTOU
(time-of-check-to-time-of-use) race-condition vulnerabilities, which
occur between pairs of check/use system calls that involve a name of a
file. Existing solutions either help programmers to detect such races
(by pinpointing their location) or prevent them altogether (by
altering the operating system). But the latter alternative is not
prevalent, and the former is just the first step: programmers must
still address TOCTTOU flaws within the limits of the existing API, with
which several important tasks cannot be safely accomplished in a
portable, straightforward manner. The recent "filesystem maze" attack
further worsens the problem by allowing adversaries to
deterministically win races, refuting the common perception
that the risk is small. In the face of this threat, we develop a new
algorithm that allows programmers to effectively aggregate a
vulnerable pair of distinct system calls into a single operation that
is executed "atomically". This is achieved by emulating one kernel
functionality in user mode: file-path resolution. The surprisingly
simple resulting algorithm constitutes a portable solution to a large
class of TOCTTOU vulnerabilities, without requiring modifications to
the underlying operating system.
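A minimal sketch of the component-by-component idea, in Python on a POSIX system (an illustration of symlink-refusing, openat-style resolution; not the paper's actual algorithm):

```python
import os

def open_nofollow(path):
    """Resolve `path` one component at a time, relative to a directory
    file descriptor, refusing to follow symbolic links. The returned fd
    refers to the object that was actually inspected, so later use of
    the fd cannot be redirected by an attacker racing on the path."""
    parts = [p for p in path.split("/") if p]
    fd = os.open("/" if path.startswith("/") else ".", os.O_RDONLY)
    try:
        for part in parts:
            # openat()-style call: resolve one component, no symlinks.
            next_fd = os.open(part, os.O_RDONLY | os.O_NOFOLLOW, dir_fd=fd)
            os.close(fd)
            fd = next_fd
        return fd
    except OSError:
        os.close(fd)
        raise
```

Operating on the returned descriptor afterwards (with `fstat` rather than `stat`, and so on) collapses the check and the use onto a single kernel object.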

Joint work with Tomer Hertz (Microsoft Research), David Wagner
(Berkeley), and Dilma Da Silva (IBM T.J. Watson Research Center).
Based on a paper presented at USENIX File & Storage Technologies
(FAST '08), where it was awarded Best Paper:
http://www.usenix.org/events/fast08/tech/tsafrir.html
December 7 11:15-12:15
Room: Schreiber 309
Anna Zamensky, Tel Aviv University
Nondeterministic Matrices and their Applications
One of the main principles of classical and many-valued logic is
truth-functionality: the truth-value assigned to a complex formula is
uniquely determined by the truth-values of its subformulas. In commonsense
reasoning, however, an agent often needs to deal with inherently
nondeterministic phenomena: partially unknown information, faulty devices
and ambiguity of natural language are only a few cases in point. It is
clear that nondeterminism, the very essence of which contradicts
the principle of truth-functionality, cannot be captured by classical or
many-valued logics. One possible solution is to borrow the idea of
nondeterministic computations from automata and computability theory and to
apply it to evaluations of formulas. This leads to the introduction of
Nondeterministic Matrices (Nmatrices), a generalization of
standard many-valued matrices in which the truth-value of a complex formula
is chosen nondeterministically out of a certain set of options.
In this talk we survey the theory and a number of applications of Nmatrices
in different areas. We describe the extension of the framework of Nmatrices
to languages with quantifiers and address
some problems that were not evident on the propositional level. Then we
apply the extended framework to provide modular semantics for various
non-classical logics. One such example is in the area of paraconsistent
logics, which are logics that allow nontrivial inconsistent theories and
are useful for reasoning with inconsistent information. Another application
of Nmatrices is in proof theory. We use finite Nmatrices to provide
semantics for a general class of labeled calculi (the well-known first-order
Gentzen-type systems can be thought of as a specific instance of such
calculi), and to characterize such important syntactic properties as
cut-elimination, invertibility of rules, etc.
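A toy illustration of the propositional idea (my own sketch, not from the talk): a nondeterministic truth table maps each tuple of argument values to a set of permitted values, and evaluation collects every value some legal choice could produce. Here a hypothetical paraconsistent-style negation is allowed to keep a true formula true:

```python
from itertools import product

# Nondeterministic truth tables: each entry is a SET of permitted values.
# Classical disjunction, plus a negation where "not 1" may be 0 or 1.
OR = {(0, 0): {0}, (0, 1): {1}, (1, 0): {1}, (1, 1): {1}}
NEG = {(0,): {1}, (1,): {0, 1}}
TABLES = {"or": OR, "neg": NEG}

def possible_values(formula, valuation):
    """All values that some legal nondeterministic choice of table entries
    can assign to `formula`. Atoms are strings; compounds are ('op', args...).
    (Shared subformulas are not identified here, a simplification.)"""
    if isinstance(formula, str):
        return {valuation[formula]}
    op, *args = formula
    result = set()
    for combo in product(*(possible_values(a, valuation) for a in args)):
        result |= TABLES[op][combo]
    return result

# Double negation of a true atom is genuinely nondeterministic here:
print(possible_values(("neg", ("neg", "p")), {"p": 1}))  # {0, 1}
```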
The research is conducted under the supervision of Prof. Arnon Avron.
December 14 11:15-12:15
Room: Schreiber 309
Dana Ron, Tel Aviv University
Some Techniques in Property Testing
In broad terms, property testing is the study of the
following class of problems: Given the ability to perform
(local) queries concerning a particular object (e.g.,
a function, or a graph), the task is to determine whether
the object has a predetermined (global) property (e.g.,
linearity or 3-colorability), or is far from having the
property (i.e., should be modified significantly so as to
obtain the property).
Thus property testing is a relaxation of exact decision
making, and can be viewed as approximate decision making.
As such, property testing algorithms are expected to be
much more efficient than the corresponding exact decision
procedures. In particular, we would like the algorithm to
inspect only a small (randomly selected) part of the whole
object, and to run in time sublinear (or even independent)
of the size of the object, where a small probability of
failure is allowed.
Property testing of functions was first explicitly defined
by Rubinfeld and Sudan in the context of program testing,
where their focus was on testing algebraic properties of
functions. The study of properties of combinatorial objects,
and in particular graphs, was initiated by Goldreich,
Goldwasser and Ron. In recent years, property testing has
been studied in many additional contexts.
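The algebraic testing mentioned above has a classic concrete instance, the Blum-Luby-Rubinfeld (BLR) linearity test: query f at random x and y and check that f(x) XOR f(y) = f(x XOR y). The sketch below (my own illustration) exhaustively counts violating pairs, confirming that parities always pass while a far-from-linear function is caught:

```python
def blr_violations(f, n):
    """Count pairs (x, y) with f(x) ^ f(y) != f(x ^ y); the BLR test
    queries one random such pair and rejects if it is violated."""
    return sum(f(x) ^ f(y) != f(x ^ y)
               for x in range(2 ** n) for y in range(2 ** n))

n = 4
parity = lambda x: bin(x & 0b0101).count("1") % 2        # a linear function
conjunction = lambda x: int((x & 0b0011) == 0b0011)      # x0 AND x1, 1/4-far

assert blr_violations(parity, n) == 0       # linear functions always pass
assert blr_violations(conjunction, n) > 0   # a far function has violating
                                            # pairs, so random queries reject
                                            # with constant probability
```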
In this talk I will not attempt to provide a comprehensive
survey, but rather I shall describe a few examples that
will highlight several analysis techniques and
give some flavor of research on property testing. I will
also mention some recently studied extensions.
December 21 11:15-12:15
Room:
Tali Kaufman, MIT
Building on Conflicts: Computational Studies of Codes
Many interesting questions in coding theory revolve around conflicts
(tradeoffs). Among them are a conflict between structure and
randomness, and a conflict between density and symmetry. In some
cases these conflicts help in understanding solutions to
questions (e.g., using the structure-versus-randomness conflict we
obtain an improved analysis of Reed-Muller codes), while in other cases
these conflicts shed light on the inherent difficulty of the
questions.
In the talk, I'll explain these conflicts and demonstrate their
relations to some important questions in computer science, such as
locally testable and correctable codes. The underlying goal is that a
better understanding of these conflicts will lead to better codes
and improved analysis of known codes.
Sunday, December 28, 11:15-12:15
Room: Schreiber 309
Nir Ailon, Google Research
New Algorithms for Ranking and Dimension Reduction
The study of ranking crosses many disciplines.
Social choice theoreticians have been interested for centuries
in finding a good way to rank a set of candidates. Econometricians
have been asking for decades how people choose from (and more
generally rank) alternative sets. More recently, information retrieval
theoreticians and practitioners have been interested in ranking search
query results. In verification, practitioners have been interested
in ordering variable sets so as to reduce the time to detect software
errors. These recent problems have been studied in both the machine
learning and combinatorial optimization communities. I will present
my contributions on both fronts, and show a connection to
classic theories of social choice and econometrics.
Randomized algorithms have been using measure concentration phenomena in
countless cases, and in particular in dimension reduction and streaming.
These tools have become a standard black box, and their computational
aspects have been almost taken for granted. In recent work I have
discovered an exciting and surprising new method for computing a standard
dimension reduction transform (Johnson-Lindenstrauss) more efficiently
than previously assumed. This method has since been used to speed up
various standard high-dimensional linear algebraic computations
(such as SVD) and fueled increased collaboration between computer science
and analysis. The main ingredient is the use of a Fast Fourier Transform
and a type of uncertainty principle.
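The Fourier ingredient can be sketched as follows (a simplified illustration of the fast Johnson-Lindenstrauss idea, not Ailon's actual construction): randomize signs, mix with a fast Walsh-Hadamard transform in O(d log d), then sample a few coordinates of the mixed vector.

```python
import math, random

def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized), O(d log d) butterfly."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def fast_jl(x, k, rng):
    """Project x (length d, a power of 2) down to k coordinates.
    y = sample(H D x): D random signs, H Hadamard, then coordinate sampling.
    Since ||H D x||^2 = d ||x||^2, dividing sampled entries by sqrt(k)
    gives E[||y||^2] = ||x||^2."""
    d = len(x)
    signs = [rng.choice((-1, 1)) for _ in range(d)]      # the D matrix
    mixed = fwht([s * v for s, v in zip(signs, x)])      # the H matrix
    coords = rng.sample(range(d), k)                     # the sparse P matrix
    return [mixed[i] / math.sqrt(k) for i in coords]

# Exact algebraic sanity checks of the transform:
assert fwht(fwht([1, 2, 3, 4])) == [4, 8, 12, 16]          # H*H = d*I
assert sum(v * v for v in fwht([1, 2, 3, 4])) == 4 * 30    # Parseval

y = fast_jl([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], 4, random.Random(0))
assert len(y) == 4
```

The sign randomization "spreads out" any fixed input (the uncertainty-principle step), so sampling a few Hadamard coordinates suffices.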
Sunday, December 28, 13:30-14:30
Room: Schreiber 006
Christos Papadimitriou
University of California, Berkeley
The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences
Computational research transforms the sciences (physical, mathematical, life or social)
not just by empowering them analytically, but mainly by providing a novel and powerful
perspective which often leads to unforeseen insights. Examples abound: quantum
computation provides the right forum
for questioning and testing some of the most basic tenets of quantum physics, while
statistical mechanics has found in the efficiency of randomized algorithms a powerful
metaphor for phase transitions. In mathematics, the P vs. NP problem has joined the list
of the most profound and
consequential problems, and in economics considerations of computational complexity
revise predictions of economic behavior and affect the design of economic mechanisms such
as auctions. Finally, in biology some of the most fundamental problems, such as
understanding the brain and evolution, can be productively recast in computational terms.
Thursday, January 1, 14:00-15:00
Room: Schreiber 309
Regina Barzilay, MIT
Climbing the Tower of Babel: Advances in Unsupervised Multilingual Learning
For most natural language processing tasks, unsupervised methods significantly
underperform their supervised counterparts. In this talk, I will demonstrate
that multilingual learning can narrow this gap. The key insight is that joint
learning from several languages reduces uncertainty about the linguistic
structure of individual languages. These methods exploit the deep structural
connections between languages, connections that have driven many important
discoveries in anthropology and historical linguistics.
I will present multilingual unsupervised models for morphological
segmentation and part-of-speech tagging. Multilingual data
is modeled as arising through a combination of
language-independent and language-specific probabilistic processes. This
approach allows the model to identify and learn from recurring cross-lingual
patterns, ultimately improving prediction accuracy in each language.
I will also discuss
ongoing work on unsupervised decoding of ancient Ugaritic tablets using
data from related Semitic languages.
This is joint work with Benjamin Snyder, Tahira Naseem and Jacob Eisenstein.
Sunday, January 4, 11:15-12:15
Room: Schreiber 309
Oded Goldreich, Weizmann Institute of Science
On Proximity Oblivious Testing
We initiate a systematic study of a special type of property testers.
These testers consist of repeating a basic test
a number of times that depends on the proximity parameter,
whereas the basic test itself is oblivious of the proximity parameter.
We refer to such basic tests as proximity-oblivious testers.
While proximity-oblivious testers were studied before,
most notably in the algebraic setting,
the current study seems to be the first one to focus on graph properties.
We provide a mix of positive and negative results,
and in particular characterizations of the graph properties that have
constant-query proximity-oblivious testers in the two standard models
(i.e., the adjacency matrix and the bounded-degree models).
Furthermore, we show that constant-query proximity-oblivious testers
do not exist for many easily testable properties,
and that even when proximity-oblivious testers exist, repeating them
does not necessarily yield the best standard testers
for the corresponding property.
Joint work with Dana Ron.
January 11 11:15-12:15
Room: Schreiber 309
Guy Rothblum, MIT
Delegating Computation Reliably: When, How and Why
In an emerging computing paradigm, computational capabilities, from
processing power to storage capacities, are offered to users over
communication networks as a service. This new paradigm holds enormous
promise, especially for increasing the utility of computationally weak
devices. A natural approach is for weak devices to delegate expensive
tasks, such as storing a large file or running a complex computation,
to more powerful entities (say servers) connected to the same network.
While the delegation approach seems promising, it raises immediate
concerns:
When and how can a weak device verify that a computational task was
completed correctly? For example, how can such a device verify that a
storage server did not corrupt large files? Or that a computation
server performed a costly computation correctly?
This practically motivated question touches on foundational questions
in cryptography and complexity theory. In fact, the approach of
delegating computation has become a goldmine for improving the
efficiency of cryptographic and algorithmic schemes.
In the talk I will survey recent progress on understanding when and
how and for what purpose tasks such as storage or computation can be
delegated reliably. As an example, we will see that a rich family of
expensive computations can be delegated efficiently, even if the weak
verifying device can only run in quasilinear time, logarithmic space,
or constant parallel running time.
January 18 11:15-12:15
Room: Schreiber 309
Michal Irani, Weizmann Institute
Predicting the invisible; Detecting the unexpected
In this talk I will show how complex visual inference tasks can be performed with no
prior examples by exploiting redundancy within and across different parts of the visual
data. Comparing and integrating local pieces of visual information gives rise to complex
notions of visual similarity and to a general "Inference by Composition" approach. This
makes it possible to infer the likelihood of new visual data that was never seen before,
and to make inferences about complex static and dynamic visual information without any
prior examples or prior training. I will demonstrate the power of this approach on
several example problems (as time permits):
1. Detecting complex objects and actions (often based only on a rough hand-sketch of
what we are looking for).
2. Prediction and completion of missing information in images and videos.
3. Summarization of visual data: generating small/short visual summaries of
images/videos.
4. Detecting the "irregular" and "unexpected" (e.g., suspicious behaviors, defects,
saliency, attention).
5. "Segmentation by Composition": segmentation and extraction of complex objects in
images.
Sunday, January 25, 11:15-12:15
Room: Schreiber 309
Benny Applebaum, Princeton University
Cryptography in Constant Parallel Time and its Applications
How much computational power is required to perform basic cryptographic
tasks?
We will survey a number of recent results on the complexity of basic
cryptographic primitives such as one-way functions, pseudorandom generators,
encryption schemes and signatures. Specifically, we will consider the
possibility of computing instances of these primitives by NC0 functions, in
which each output bit depends on only a constant number of input bits.
Somewhat surprisingly and unintuitively, it turns out that most
cryptographic tasks can be carried out by such simple functions. We will
also explore the cryptographic strength of several interesting subclasses of
NC0.
On the application side, we will mention new connections between NC0
cryptography and other seemingly unrelated areas such as secure distributed
computation, program checking, and hardness of approximation. We will focus
on a new public-key encryption scheme whose security is based on two new
assumptions: one related to the pseudorandomness of a certain simple NC0
generator, and the second based on the hardness of detecting small
non-expanding sets in sparse random graphs. Most, if not all, existing
constructions of public-key encryption use hardness assumptions with
significant algebraic structure. The main advantage of the new scheme is the
relatively general and unstructured nature of the new assumptions, which
seem qualitatively different than previous ones.
Based on joint works with Yuval Ishai and Eyal Kushilevitz, and with Boaz
Barak and Avi Wigderson. The talk is self-contained, and does not assume
previous background in cryptography.
Sunday, February 8, 11:15-12:15
Room: Schreiber 309
Orr Dunkelman, Ecole Normale Superieure
Treatment of the Initial Value in Time-Memory-Data Tradeoff Attacks on
Stream Ciphers
Time-Memory Tradeoff (TMTO) attacks on stream ciphers are a serious security threat and
the resistance to this class of attacks is an important criterion in the design of a
modern stream cipher. TMTO attacks are especially effective against stream ciphers, where
a variant of the TMTO attack can make use of multiple data to reduce the offline and
online time complexities of the attack (given a fixed amount of memory).
In this talk we present a new approach to TMTO attacks against stream ciphers using a
publicly known initial value (IV): We suggest not to treat the IV as part of the secret
key material (as done in current attacks), but rather to choose in advance some IVs and
apply a TMTO attack to streams produced using these IVs. We show that while the obtained
tradeoff curve is identical to the curve obtained by the current approach, the new
technique makes it possible to mount the TMTO attack in a larger variety of settings. For example,
if both the secret key and the IV are of length n bits, it is possible to mount an attack
with data, time, and memory complexities of 2^{4n/5}, while in the current approach,
either the time complexity or the memory complexity is not less than 2^n. We conclude
that if the IV length of a stream cipher is less than 1.5 times the key length, there
exists an attack on the cipher with data, time, and memory complexities less than the
complexity of exhaustive key search.
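As a hedged sanity check (my own arithmetic, using the standard Biryukov-Shamir tradeoff curve TM^2D^2 = N^2 rather than anything specific to this paper): treating the key and IV together as an N = 2^{2n} search space, the point T = M = D = 2^{4n/5} does lie on that curve, with all three complexities below exhaustive key search.

```python
from fractions import Fraction

# Exponents of 2 as multiples of n: the searched state is key (n bits)
# plus IV (n bits), so N = 2^{2n}.
state = Fraction(2)
t = m = d = Fraction(4, 5)          # claimed T = M = D = 2^{4n/5}

# Biryukov-Shamir curve T * M^2 * D^2 = N^2, in exponents: t + 2m + 2d = 2*state.
assert t + 2 * m + 2 * d == 2 * state

# All three online complexities are below exhaustive key search (2^n):
assert t < 1 and m < 1 and d < 1
```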
This is a joint work with Nathan Keller.
Sunday, March 1, 11:15-12:15
Room: Schreiber 006
Gil David, Tel Aviv University
Anomaly detection and classification via diffusion processes in hypernetworks
Anomaly detection identifies patterns that do not conform to an established normal
behavior. The detected patterns are often translated into critical and actionable
information in several application domains, since they deviate from normal behavior.
The main challenges anomaly detection algorithms face are: achieving a low false alarm
rate, clustering into normal and abnormal regions, stealthiness of a malicious
adversary, dynamic and evolving data, portability of the techniques between different
domains and applications, availability of labeled data for training/validation, and
noisy data that tends to be similar to real anomalies.
Usually, in addition to the challenge of detecting anomalies in a dataset, the analyzed
data is also high-dimensional. Since the data in most modern systems can be described
by hundreds and even thousands of parameters, the dimensionality of the data is very
high and its processing becomes impractical. The "curse of dimensionality" is
associated with high-dimensional data: as the dimensionality of the input data space
increases, it becomes exponentially more difficult to process and analyze the data.
Furthermore, adding more dimensions can increase the noise, and hence the error. This
problem is a significant obstacle for high-dimensional data analysis, since a local
neighborhood in high dimensions is no longer local. Therefore, high-dimensional data is
hard to understand, to draw conclusions from, or to find anomalies in that deviate from
normal behavior.
Anomaly detection in high-dimensional data is in extensive use in a wide variety of
areas. For example, in communication networks it is used for identifying intrusions by
internal or external users, for detecting faulty components and for recognizing
unauthorized protocols and transactions. Other challenging domains include financial
applications, intelligence systems and critical systems that contain multiple sensors,
to name a few.
In the thesis, we focus on anomaly detection in high-dimensional data for the following
major domains:
1. Intrusion detection/prevention systems have become an integral component in security
systems. The challenge is to perform online protection without misdetections and false
alarms. To achieve this, most systems are based on signatures of intrusions that are
developed and assembled manually after a new intrusion is exposed, and then distributed
to the security clients. This approach is problematic because such systems detect only
already-known intrusions (yesterday's attacks) but fail to detect new attacks (zero-day
attacks).
2. Network traffic classification and recognition is a critical component in many
Internet applications such as traffic control, identification of specific applications,
guaranteeing QoS, etc. Until recently, classification was mainly done by inspecting the
payload of traffic packets, checking for an application signature. While payload
inspection techniques have worked well in the past, they suffer from major limitations:
applications such as Skype use techniques to avoid protocol identification, payloads
become encrypted, and many new protocols are introduced every year. Therefore, the
reactive development of a signature for each new protocol is challenging on the one
hand and impractical on the other.
We will introduce a unique framework that is based upon diffusion processes, diffusion
geometries, random projections and other methodologies for finding meaningful geometric
descriptions in high-dimensional datasets. We will show that the eigenfunctions of the
generated underlying Markov matrices can be used to construct diffusion processes that
generate efficient representations of complex geometric structures for high-dimensional
data analysis. This is done by nonlinear transformations that identify geometric
patterns in these huge datasets and find the connections among them while projecting
them onto low-dimensional spaces. Our methods automatically detect the anomalies that
deviate from normal behavior.
We will also introduce the Localized Diffusion Folders methodology for clustering and
classification of high-dimensional datasets. The diffusion folders are a multi-level
partitioning of the data into local neighborhoods, achieved by several random
selections of data points and folders in the graph and by defining local diffusion
distances between them. This multi-level partitioning defines an improved localized
geometry of the data and a new localized Markov transition matrix that is used for the
next time stage in the diffusion process. The result of this clustering method is a
bottom-up hierarchical clustering of the data, where each level in the hierarchy
contains localized diffusion folders of folders from the lower levels.
The performance of the proposed algorithms is demonstrated on data that was collected
from real networks.
Sunday, March 8, 11:15-12:15
Room: Schreiber 006
Amir Ban, The Hebrew University
Machine Learning of Evaluation (with Applications to Computer Chess)
The standard and highly successful approach to computer chess, as well as to
similar zero-sum, complete-information games (Checkers, Backgammon, Go,
Go-Moku, etc.), combines game-tree search with position evaluation.
Theoretical advances in computer chess (and similar games) have been
confined almost entirely to aspects of tree search. On the other hand, few
theories of evaluation have been proposed, and none have been universally
accepted. Though participants in the field routinely speak about the
correctness or incorrectness of a specific position evaluation, there does
not exist an empirical, let alone a theoretical, understanding of what this
statement means. There are no published applications of machine learning of
evaluation helping a program, initially close to state-of-the-art strength,
to improve its performance, let alone dominate its rivals.
The development of Junior (a.k.a. Deep Junior), among the strongest computer
chess programs for the past decade, is guided by a novel theory of
evaluation, giving meaning to questions about the correctness of a position
evaluation, enabling comparison of evaluation functions, and by extension,
enabling the evaluation function to be improved and learned. Junior has won
several world championships using this method, and its playing style has
changed to its trademark enterprising character. In this talk I will derive
and describe a model of evaluation as a predictor
of the game result. Standing this model on its head, I will show how the
quality of evaluation can be objectively assessed, and how individual
position evaluations and indeed entire evaluation functions may be compared
for correctness. I will show how the tuning of optimal parameters may be
viewed as a maximum likelihood problem, to be tackled by traditional
optimization techniques, and review results obtained by this method in
Junior.
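The maximum-likelihood view can be sketched in a few lines (a generic logistic-regression sketch on synthetic data with hypothetical feature names; not Junior's actual evaluation or training procedure): model the probability of winning as a logistic function of the evaluation score, then ascend the log-likelihood gradient to tune the evaluation weights.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# Hypothetical evaluation features (e.g. material, mobility, king safety)
# and a hidden "true" weighting used only to label synthetic games.
TRUE_W = [2.0, -1.0, 0.5]
data = []
for _ in range(200):
    x = [random.uniform(-1, 1) for _ in TRUE_W]
    score = sum(w * v for w, v in zip(TRUE_W, x))
    data.append((x, 1 if score > 0 else 0))   # 1 = win, 0 = loss

# Maximum-likelihood tuning: gradient ascent on the log-likelihood of the
# observed results under the model P(win) = sigmoid(evaluation).
w = [0.0] * len(TRUE_W)
for _ in range(2000):
    grad = [0.0] * len(w)
    for x, y in data:
        err = y - sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for j, xj in enumerate(x):
            grad[j] += err * xj
    w = [wi + g / len(data) for wi, g in zip(w, grad)]

accuracy = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5) == (y == 1)
               for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The learned weights recover the direction of the hidden weighting, which is the sense in which result prediction makes "correctness" of an evaluation measurable.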
Sunday, March 15, 11:15-12:15
Room: Schreiber 006
Nati Linial, The Hebrew University
The wonders of graph spectra
As we learn (and teach) in our undergraduate classes, a good way to represent a
graph G is by means of its adjacency matrix A=A_G. There are other matrices that are
naturally associated with graphs such as the discrete Laplacian L=L_G. It turns out that
there is a good deal of information about G that can be read out from the eigenvalues and
eigenvectors of these matrices. The most well-known among these is the close connection
between the spectral gap of A (or the first positive eigenvalue of L) and the expansion
of G. Much less is known about the eigenfunctions of these matrices and their
relationship with properties of G. I have been fascinated by these topics for a long
time, and in this seminar I will try to share my excitement about them with you.
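A quick numerical illustration of eigenvalues encoding graph structure (standard textbook facts, not from the papers above): the n-cycle's adjacency matrix has eigenvalues 2cos(2*pi*k/n), so its spectral gap shrinks as n grows, consistent with cycles being poor expanders.

```python
import math

def cycle_adjacency(n):
    """Adjacency matrix of the n-cycle (vertex i joined to i-1 and i+1 mod n)."""
    return [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)]
            for i in range(n)]

def check_eigenpair(A, v, lam, tol=1e-9):
    """Verify numerically that A v = lam * v."""
    n = len(A)
    return all(abs(sum(A[i][j] * v[j] for j in range(n)) - lam * v[i]) < tol
               for i in range(n))

n = 8
A = cycle_adjacency(n)
for k in range(n):
    # Eigenvalue 2cos(2*pi*k/n) with eigenvector v_j = cos(2*pi*k*j/n).
    lam = 2 * math.cos(2 * math.pi * k / n)
    v = [math.cos(2 * math.pi * k * j / n) for j in range(n)]
    assert check_eigenpair(A, v, lam)
```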
My talk will be based on several papers written jointly with Y. Bilu, D. Puder, Y. Dekel
and J. Lee.
Sunday, March 22, 11:15-12:15
Room: Schreiber 006
Yuval Rabani, Technion
Monotonicity in bargaining networks
We study bargaining networks, discussed in a recent paper of Kleinberg and
Tardos (STOC 2008), from the perspective of cooperative game theory. In
particular we examine two solution concepts, the nucleolus and the core
center. Both solution concepts define unique solutions, so they provide
testable predictions. In general coalitional games, both solutions are weakly
monotone, but they are not strongly monotone. We define a new monotonicity
property that is a natural axiom of any bargaining game solution, and we prove
that both the nucleolus and the core center satisfy this monotonicity property.
Our proofs are based on a primal-dual argument (for the nucleolus) and on the
FKG inequality (for the core center). We further observe some qualitative differences
between the two solution concepts. In particular, there are cases where a strict
version of our monotonicity property is a natural axiom, but only the core center
satisfies it. On the other hand, the nucleolus is easy to compute, whereas computing
the core center is #P-hard (yet it can be approximated in polynomial time).
This is joint work with Yossi Azar, Nikhil Devanur, and Kamal Jain.
Sunday, March 29, 11:15-12:15
Room: Schreiber 006
Adi Shamir, Weizmann Institute
How to Solve it: New Techniques in Algebraic Cryptanalysis
In this talk I will introduce a new kind of attack (called Cube
Attack) on cryptographic schemes which can be represented by an (unknown)
low degree polynomial with tweakable public variables such as a plaintext
or IV and fixed secret variables such as a key. Its complexity is
exponential in the degree but only polynomial in the key size, and it was
successfully applied to several concrete cryptographic schemes.
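A toy instance of the idea (my own illustrative polynomial, not one of the attacked ciphers): summing the polynomial over all assignments of a chosen "cube" of public variables cancels the other terms modulo 2 and leaves a linear "superpoly" in the secret bits, here simply k0.

```python
from itertools import product

def f(v0, v1, v2, k0, k1):
    """A toy low-degree polynomial over GF(2): public bits v*, secret bits k*."""
    return (v0 & v1 & k0) ^ (v0 & k1) ^ v1 ^ (v2 & k0 & k1) ^ (k0 & k1)

def cube_sum(k0, k1, v2=0):
    """Sum f over the cube {v0, v1}, with the remaining public bit fixed."""
    return sum(f(v0, v1, v2, k0, k1)
               for v0, v1 in product((0, 1), repeat=2)) % 2

# The superpoly of cube {v0, v1} is exactly k0: one pattern of black-box
# queries reveals a linear equation in the secret key.
for k0, k1 in product((0, 1), repeat=2):
    assert cube_sum(k0, k1) == k0
```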
The talk will be self-contained, requiring no prior knowledge of
cryptanalysis. It is joint work with Itai Dinur.
Sunday, April 19, 11:15-12:15
Room: Schreiber 006
Irit Dinur, The Weizmann Institute
Composition of PCPs
PCPs are proofs that can be verified randomly, looking at very few proof
bits. Ever since the celebrated PCP theorem proven in the early 90’s,
all known PCP verifiers for NP are based on composition: the idea, due
to Arora and Safra, of having one PCP verifier invoke another PCP
verifier as a subroutine.
However, unlike the straightforward notion of subroutines for algorithms,
the PCP analog is much more subtle. In fact, it took several years to
translate the "composition paradigm" into a well-formulated PCP
composition theorem that stands on its own. This is one of the main
reasons that PCP constructions are often quite complicated.
In my talk I will describe a new modular composition theorem for
"strong" PCPs: PCPs that make only two queries into the proof and have
low soundness error. Such PCPs are equivalent to the label cover
problem, and are the starting point for a large number of
inapproximability results. I will show how our composition theorem can
be used to give a considerably simpler proof for the recent breakthrough
result of Moshkovitz and Raz.
Based on joint work with Prahladh Harsha.
Sunday, April 26, 11:15-12:15
Room: Schreiber 006
Nachum Dershowitz, Tel Aviv University
Axiomatization and Proof of the Church-Turing Thesis
Church's Thesis asserts that the only numeric functions that can be
calculated by effective means are the recursive ones, which are the same
(extensionally) as the Turing-computable numeric functions.
Yuri Gurevich's Abstract State Machine Theorem states that every
classical algorithm is emulated (step for step) by an abstract state
machine, a most generic model of sequential computation.
That theorem presupposes three natural postulates about algorithmic
computation. By augmenting those postulates with an additional
requirement regarding basic operations, a natural axiomatization of
computability and a proof of Church's Thesis obtain, as Gödel and others
suggested may be possible.
In a similar way, but with a different set of basic operations, one can
prove Turing's Thesis, characterizing the effective string functions
and, in particular, the effectively computable functions on string
representations of numbers.
Joint work with Yuri Gurevich.
Sunday, May 3, 11:15-12:15
Room: Schreiber 006
Dieter van Melkebeek, University of Wisconsin
Lower Bounds for Satisfiability and Related Problems
Ever since the work of Cook and Levin, satisfiability has been
recognized as a central problem in computational complexity. It is
widely believed to be intractable, and yet until recently even a
linear-time logarithmic-space algorithm was not ruled out.
The last few years have seen considerable progress in time-space lower
bounds for satisfiability and closely related problems on various
models of computation. This talk will describe the state of the art on
deterministic, randomized, and quantum machines, survey the underlying
ideas, and present some of the arguments involved.
Sunday, May 17, 11:15-12:15
Room: Schreiber 006
Aviel D. Rubin, Johns Hopkins University
Security Issues in Electronic Voting
The 2000 election debacle in Florida led to the quick passage of the
Help America Vote Act (HAVA), in which the United States Congress
appropriated $3.9 Billion to the states to purchase electronic voting
equipment. Vendors sprang out of nowhere, and many states quickly used
their HAVA funds to purchase Direct Recording Electronic (DRE) paperless
voting machines. Many of these machines have subsequently been found to
contain serious security vulnerabilities. In this talk, I will describe
the technical and political issues surrounding electronic voting in the
United States. We will explore some of the studies that were done around
electronic voting machines, as well as risks and mitigations. I will
also describe the NSF ACCURATE Center, which was funded to study
technological solutions to voting, as well as our center's outreach
activities. Finally, I will describe my lab's recent research effort to
build a demonstration voting system with a minimal trusted computing
base (TCB).
Bio:
Dr. Aviel D. Rubin is Professor of Computer Science and Technical
Director of the Information Security Institute at Johns Hopkins
University. Professor Rubin directs the NSF-funded ACCURATE center for
correct, usable, reliable, auditable and transparent elections. Prior to
joining Johns Hopkins, Rubin was a research scientist at AT&T Labs. He
is also a co-founder of Independent Security Evaluators
(securityevaluators.com), a security consulting firm. Rubin has
testified before the U.S. House and Senate on multiple occasions, and he
is the author of several books, including Brave New Ballot (Random House,
2006), Firewalls and Internet Security, second edition (with Bill
Cheswick and Steve Bellovin, Addison Wesley, 2003), White-Hat Security
Arsenal (Addison Wesley, 2001), and Web Security Sourcebook (with Dan
Geer and Marcus Ranum, John Wiley & Sons, 1997). In 2004 Baltimore
Magazine named Rubin a Baltimorean of the Year for his work in
safeguarding the integrity of our election process, and he is also the
recipient of the 2004 Electronic Frontiers Foundation Pioneer Award.
Rubin has a B.S. ('89), M.S.E. ('91), and Ph.D. ('94) from the University
of Michigan.
Sunday, May 29, 11:15-12:15
Room: Schreiber 006
Oliviero Stock, FBK-irst, Trento, Italy
Cafe table and humour as computational persuasive systems
Persuasion is an art with a long tradition. Argumentation is only one of the available means, and several scholars have emphasized the importance of a peripheral route to persuasion, the one that, for instance, is at the basis of most forms of advertisement. Computers can add flexibility and adaptivity to persuasion, and in this talk I will describe some persuasive systems we are developing.
The dimension of small-group face-to-face interaction has recently become an important topic of research. In this talk I take a specific aim: can technology help orient natural face-to-face communication toward a cultural discussion?
One main contextual aspect for our work has been the museum visit. According to recent studies, a museum visit by a small group (e.g. a family or a few friends) can be considered successful if conversation about the cultural experience develops among its members. People often stop at the museum cafe to take a break during the visit or before leaving. We see this as the ideal situation in which to introduce a tabletop interface meant to foster the visitors' conversation.
A second theme is devoted to a fundamental resource highly exploited in advertising: humor. Simple forms of humor are often used in promotional messages. A typical case is the ironic variation of a familiar expression so as to evoke a target concept. We have developed a technique for producing this kind of expression automatically, so that novel ones can be generated for a given situation.
About the speaker:
Oliviero Stock has been at ITC-irst since 1988 and was its Director from 1997 to 2001. His activity is mainly in artificial intelligence, natural language processing, intelligent interfaces, and cognitive technologies. He is the author of nearly two hundred peer-reviewed papers, author or editor of twelve volumes, and a member of the editorial boards of a dozen scientific journals. He has also been an invited speaker at over sixty conferences and a participant or coordinator in over twenty panels. He has been Chairman of the European Coordinating Committee for Artificial Intelligence, President of the Association for Computational Linguistics and of the Italian AI Association, and is an ECCAI Fellow and an AAAI Fellow. He has coordinated many projects, including an ongoing collaboration project with the University of Haifa and Bar-Ilan University.
Sunday, June 7, 11:15-12:15
Room: Schreiber 006
Solomon Golomb, University of Southern California
The Search for Costas Arrays
A Costas array of order n can be defined as an nxn permutation matrix in which, among all
vectors connecting pairs of 1's in the matrix, no two agree in both magnitude and slope.
These arrays, when interpreted as frequencyhopping patterns for radar or sonar, have
ideal "thumbtack" ambiguity functions. Several systematic constructions, all based on
primitive roots in finite fields, provide examples for infinitely many values of n.
Computer searches have found all the examples of Costas arrays for all orders n<=26.
No examples are known for n=32 or n=33, and for infinitely many larger values of n.
Moore's Law may eventually enable us to search n=32 and n=33 exhaustively. It is possible
that for most large values of n only the examples obtained from the known systematic
constructions exist. N. Levanon and his students have looked at the ambiguity
functions of Costas arrays for continuous variations in time and frequency.
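The defining property and one of the primitive-root constructions mentioned above are easy to state in code. The sketch below (my own illustration, not from the talk) checks that all difference vectors between pairs of 1's are distinct, and builds an order p-1 example via the Welch construction from a primitive root g modulo a prime p.

```python
# Welch construction of a Costas array, plus a verifier for the
# "no repeated difference vector" property (an illustrative sketch).
def is_costas(perm):
    """perm[i] = row index of the 1 in column i (a 0-based permutation).
    Costas property: all vectors connecting pairs of 1's are distinct."""
    seen = set()
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            v = (j - i, perm[j] - perm[i])  # (column offset, row offset)
            if v in seen:
                return False
            seen.add(v)
    return True

def welch_costas(p, g):
    """Order p-1 Costas array from a primitive root g mod prime p:
    column i holds a 1 in row (g**(i+1) mod p) - 1."""
    return [pow(g, i + 1, p) - 1 for i in range(p - 1)]

perm = welch_costas(11, 2)  # 2 is a primitive root mod 11
print(perm, is_costas(perm))
```

The exhaustive searches mentioned in the abstract amount to running a check like `is_costas` over all n! permutation matrices (with heavy pruning), which is why orders beyond 26 remain out of reach.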
Sunday, June 14, 11:15-12:15
Room: Schreiber 006
Julia Kempe, Tel Aviv University
The Computational Complexity of Quantum Systems
In recent years, physicists and computer scientists have
gotten together in an effort to understand the complexity
of computing properties of quantum systems. In this talk
I will survey some of the recent progress in this
interdisciplinary research area. Our main focus will be
on quantum analogues of the Cook-Levin theorem, starting
with the most basic version, and ending with a surprising
version that is true only in the quantum world. I will
also describe the close connection this area of research
has to a model of quantum computation known as adiabatic quantum
computation. Finally, if time permits, I will describe
attempts to find quantum analogues of the celebrated PCP
theorem.
NOTE: This talk assumes no knowledge of quantum computation.
Thursday, June 18, 14:30-15:30
Room: Schreiber 006
Daphne Koller, Stanford University
Probabilistic Models for Holistic Scene Understanding
Over recent years, computer vision has made great strides towards annotating parts of an
image with symbolic labels, such as object categories or segment types. However, we are
still far from the ultimate goal of providing a semantic description of an image, such as
"a man, walking a dog on a sidewalk, carrying a backpack". In this talk, I will describe
some projects that use probabilistic models in an attempt to move us a little closer
towards this goal.
The first part of the talk will present methods that use a more holistic scene analysis
to improve our performance at core tasks such as object detection, segmentation, or 3D
reconstruction. The second part of the talk will focus on finer-grained modeling of
object shape, so as to allow us to annotate images with descriptive labels related
to the object shape, pose, or activity (e.g., is a cheetah running or standing). These
vision tasks rely on novel algorithms for core problems in machine learning and
probabilistic models, such as efficient algorithms for probabilistic correspondence,
transfer learning across related object classes for learning from sparse data, and more.
Daphne Koller is a Professor of Computer Science at Stanford
University. Her main research focus is in developing and using
machine learning and probabilistic methods to model and analyze
complex systems, and she is particularly interested in using these
techniques to understand biological systems and the world around us.
Professor Koller is the author of over 100 refereed publications,
which have appeared in venues that include Science, Nature Genetics,
and the Journal of Games and Economic Behavior. She is a Fellow of
the American Association for Artificial Intelligence, and has
received a number of awards, including the Sloan Foundation Faculty
Fellowship in 1996, the ONR Young Investigator Award in 1998, the
Presidential Early Career Award for Scientists and Engineers (PECASE)
from President Clinton in 1999, the IJCAI Computers and Thought Award
in 2001, the Cox Medal for excellence in fostering undergraduate
research at Stanford in 2003, the MacArthur Foundation Fellowship in
2004, and the first-ever ACM/Infosys award in 2008.
Sunday, June 21, 11:15-12:15
Room: Schreiber 006
Igor Ulitsky, Blavatnik School of Computer Science, Tel Aviv University
Identification of biologically and clinically relevant functional modules through
integration of network and expression data
Most cellular functions rely on the coordinated action of the products of multiple genes,
often referred to as functional modules. A major goal of computational systems biology is
to enable the delineation of these modules and to facilitate extraction of biological
insights from the deduced modular structures. Genome-wide mRNA expression profiles are
typically analyzed using clustering or differential expression analysis. In this talk, I
will show how our understanding of the cellular processes affected by transcriptional
changes can be significantly improved by integration of expression data with protein
interaction networks.
We developed several computational methods for identifying subnetworks with a
characteristic transcription pattern, such as coexpression or differential expression.
Importantly, our methods can handle cases where some of the subnetwork genes are not
regulated on the transcription level, and they are robust to noise in the protein
interaction network.
I will demonstrate the effectiveness of these methods on a variety of biological systems,
including the response of S. cerevisiae to osmotic shock and DNA damage, human
embryonic development, breast cancer and Huntington's disease. Our results include
predictions of gene function and delineation of novel pathways and complexes. In the context
of human disease, the subnetworks we identify provide a signature of the disease
potentially useful for diagnosis, pinpoint possible pathways affected by the disease, and
suggest targets for drug intervention.
Joint work with Ron Shamir, Franz-Josef Muller, Louise Laurent, Jeanne Loring, and Richard
Karp.
Sunday, July 5, 11:15-12:15
Room: Schreiber 006
Tal Lev-Ami, Blavatnik School of Computer Science, Tel Aviv University
Automatic Maintenance of Transitive Properties with Applications
for Shape Analysis (Thesis defense talk)
The importance of software in today's world has made software
correctness more crucial than ever. Data structures are an essential
part of most software developed. To reason about the correctness of code
using data structures, transitive properties are needed to express
concepts such as paths in the program's memory graph. This is especially
true in imperative code where destructive pointer updates are used. The
analysis of programs manipulating linked data structures is called Shape
Analysis.
In this thesis we explore different techniques for reasoning about
transitive properties with applications to Shape Analysis.
In this talk, I will present two approaches explored in the thesis, one
using first order automated theorem provers and another based on
specializing shape analysis algorithms to specific data structures.
Sunday, July 12, 11:15-12:15
Room: Schreiber 006
Michal Ozery-Flato, Tel Aviv University
Algorithmic problems in genome rearrangements: from evolution
to cancer
In genome rearrangement events, large segments of chromosomes are
relocated, deleted, or duplicated. Such events are relatively rare in
the evolution of species, and consequently, we can learn about the
evolution of species through their study. A major computational
problem in the analysis of genome rearrangements is the genomic
sorting problem, which seeks a shortest sequence of rearrangement
events that transforms, or sorts, one genome into another. Genomic
sorting gives rise to a spectrum of fascinating algorithmic and
combinatorial problems, each defined by the set of allowed
rearrangement operations and by the representation of the genomes.
Reversals and translocations are common types of rearrangement
events. Reversals invert an interval of genes within a chromosome.
Translocations exchange nonempty ends between two chromosomes. The
problems of sorting by reversals (SBR) and sorting by translocations
(SBT) are instances of the genomic sorting problem confined to one
type of rearrangement events, either reversals (SBR), or
translocations (SBT). In this talk I will present a combinatorial
framework for the analysis of both problems, which allows the
adaptation of algorithms for solving SBR to SBT. In addition, I will
discuss our results on the problem of sorting by legal translocations,
which is a more biologically realistic variant of SBT.
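In the standard formulation of SBR, a genome is modeled as a signed permutation, and a reversal both inverts an interval's gene order and flips each gene's orientation. The sketch below (an illustrative toy, not the speaker's algorithm) shows the basic operation on a hypothetical genome.

```python
# A signed reversal, the elementary operation in sorting by reversals
# (SBR): invert the gene order in an interval and flip each sign.
def reversal(genome, i, j):
    """Reverse the segment genome[i..j] (inclusive) and negate signs."""
    return genome[:i] + [-g for g in reversed(genome[i:j + 1])] + genome[j + 1:]

# Hypothetical toy genome: a single reversal of the interval [1..3]
# sorts it to the identity permutation.
g = [1, -4, -3, -2, 5]
g = reversal(g, 1, 3)
print(g)  # [1, 2, 3, 4, 5]
```

SBR then asks for the minimum number of such operations transforming one signed permutation into another; the combinatorial framework in the talk adapts this machinery to translocations as well.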
Genome rearrangements are also prevalent in cancer. The genomes of
most cancer cells evolve rapidly by rearrangements. In the last part
of my talk, I will discuss the karyotype sorting problem, which is a
variant of the genomic sorting problem suited for the evolution of
cancer genomes.
Joint work with Ron Shamir
Thursday, January 22, 13:15-15:00
Room 309
Leo Corry, Cohn Institute, Tel Aviv University
Emma Lehmer, Derrick Lehmer, and SWAC:
The Slow Entry of Number Theory into the Digital Age
The electronic computing revolution arrived puzzlingly late and slowly (and at times not at all) in many areas of pure mathematics, among them number theory. On the one hand, most mathematicians active in the mainstream of pure mathematics showed little interest in the possibilities that the new technology, developed after World War II, offered their research topics. On the other hand, the institutions that operated the first machines faced the need to justify the machines' considerable development and operating costs through pursuits more practical, and with more tangible potential contributions, than those that the branches of pure mathematics embodied in their public image and self-image. Nevertheless, several classical problems related to number theory gradually emerged as an important challenge for the computing power of the new machines and for the programming skill of their first operators.
In my talk I will analyze the opposing forces at work in the process of number theory's integration into the electronic age. These forces included entrenched ideals and images of mathematical research, long-term processes of change, local historical circumstances, the professional trajectories of individuals and institutions, technological constraints and opportunities, and more. Two somewhat forgotten figures who played a leading role in advancing this process will receive special attention in the talk: Emma and Derrick Henry Lehmer, a couple of mathematicians who worked for many years at the University of California, Berkeley. Were it not for their original approach to research in number theory, and for the special circumstances that shaped their careers, the electronic age would most likely have reached this field even later and more slowly.