Blackboard, Winslow Homer (1877)

Theory of Computation Seminar

We meet every Tuesday at Schreiber 309.
A light lunch is served from 12:45, and the talk starts at 13:10 and lasts 1-2 hours (with a 10-minute break).

The seminar also has a mailing list. To join or leave the list, fill out the form on this page or contact Dean Doron, the seminar coordinator.

Schedule

Archive of Previous Talks

Winter 2018-2019

  • January 8, 2019 Aviad Rubinstein (Stanford) Communication complexity of mixed Nash equilibrium in potential games We prove that for two-player $N\times N$ potential games, the communication complexity of Nash equilibrium (possibly mixed) is at least $\Omega(N^{1/8})$. To the best of our knowledge, this result is the first to demonstrate hardness of Nash equilibrium (possibly mixed) in potential games. Joint work with Yakov Babichenko.
  • January 1, 2019 Omri Weinstein (Columbia) Static Data Structure Lower Bounds Imply Rigidity. We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of t ≥ ω(log^2 n) on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space (s = (1+ε)n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy, and Yekhanin, 2009). Our result further asserts that polynomial (t ≥ n^δ) data structure lower bounds against near-optimal space would imply super-linear circuit lower bounds for log-depth linear circuits (which would close a four-decade-old open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the “inner” and “outer” dimensions of a matrix (Paturi and Pudlak, 2006), and on a new worst-to-average case reduction for rigidity, which is of independent interest. Joint work with Zeev Dvir (Princeton) and Alexander Golovnev (Harvard).
  • December 25, 2018 Yael Kalai (MSR and MIT) Efficient Multi-Party Interactive Coding In the field of interactive coding, two or more parties wish to carry out a distributed computation over a communication network that may be noisy. The ultimate goal is to develop efficient coding schemes that can tolerate a high level of noise while increasing the communication by only a constant factor (i.e., constant rate). We provide *efficient*, constant rate schemes in the synchronous communication model, over an arbitrary topology. Our scheme is resilient to adversarial *insertion/deletion* noise, where the channel can adversarially alter the content of any transmitted symbol, as well as completely remove a transmitted symbol or inject a new symbol into the channel. We allow the adversary to corrupt at most an e/m fraction of the total communication, where m is the number of links in the network, and e is a small constant. This scheme assumes the parties share a random string to which the adversarial noise is oblivious. We can remove this assumption at the price of being resilient to only an e/(m log m) fraction of adversarial errors. This is joint work with Ran Gelles and Govind Ramnarayan.
  • December 11, 2018 Avishay Tal (Stanford / Simons-Berkeley) Oracle Separation of BQP and the Polynomial Hierarchy In their seminal paper, Bennett, Bernstein, Brassard, and Vazirani [SICOMP, 1997] showed that relative to an oracle, quantum algorithms are unable to solve NP-complete problems in sub-exponential time (i.e., that Grover's search is optimal in this setting). In this work, we show a strong converse to their result. Namely, we show that, relative to an oracle, there exist computational tasks that can be solved efficiently by a quantum algorithm, but require exponential time for any algorithm in the polynomial hierarchy. (The polynomial hierarchy is a hierarchy of complexity classes that captures P, NP, coNP, and their generalizations.) The tasks that exhibit this "quantum advantage" arise from a pseudo-randomness approach initiated by Aaronson [STOC, 2010]. Our core technical result is constructing a distribution over Boolean strings that "look random" to constant-depth circuits of quasi-polynomial size, but can be distinguished from the uniform distribution by very efficient quantum algorithms. Joint work with Ran Raz.
  • December 4, 2018 Ronen Eldan (Weizmann) When is a random variable on the Boolean hypercube roughly equivalent to a mixture of variables with independent coordinates? We will review some older and some recent results regarding entropy-efficient decompositions of measures on the Boolean hypercube into "nicer" measures. Specifically, given a measure \mu on {0,1}^n, we roughly want to be able to write \mu = \sum_{i=1}^{N} \mu_i with N as small as possible and with \mu_i being close, in some sense, to a product measure for most i. This type of question has applications in several fields: 1. Random graphs and large deviations, 2. Statistical mechanics - interacting particle systems, 3. Rounding of semidefinite programs. In particular, we will talk about a specific method of proof which relies on embedding the measure into the space of paths of a Brownian motion, and see how this can be useful for the problem (as well as for other aspects of the analysis of Boolean measures).
  • November 27, 2018 Nicolas Resch (Carnegie Mellon University) Lossless dimension expanders via linearized polynomials and subspace designs For a vector space F^n over a field F, an (η, β)-dimension expander of degree d is a collection of d linear maps Γ_j : F^n → F^n such that for every subspace U of F^n of dimension at most ηn, the image of U under all the maps, ∑_{j=1}^d Γ_j(U), has dimension at least β·dim(U). Over a finite field, a random collection of d = O(1) maps Γ_j offers excellent “lossless” expansion with high probability: β ≈ d for η ≥ Ω(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even expansion factor β = 1 + ε with constant degree is a non-trivial goal. We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list decoding in the rank metric. Our approach yields the following: Lossless expansion over large fields; more precisely, β ≥ (1-ε)d and η ≥ (1-ε)/d with d = O_ε(1), when |F| ≥ Ω(n). Expansion that is optimal up to constant factors over fields of arbitrarily small polynomial size; more precisely, β ≥ Ω(δd) and η ≥ Ω(1/(δd)) with d = O_δ(1), when |F| ≥ n^δ. Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Ω(1), 1+Ω(1))-dimension expanders of constant degree over all fields. An approach based on “rank condensing via subspace designs” led to dimension expanders with β ≥ Ω(√d) over large finite fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree. Based on joint work with Venkatesan Guruswami and Chaoping Xing.
  • November 20, 2018 Chin Ho Lee (Northeastern University) Bounded Independence versus Symmetric Tests For a subset T of {0, 1}^n, define k*(T) to be the maximum k such that there exists a k-wise independent distribution supported on T. I will outline two approaches to proving lower bounds on k*(T) for symmetric subsets, and talk about how to obtain tight bounds on k*(T) for the subsets MOD(m,c) := { x in {0,1}^n : wt(x) = c mod m }, and THRESHOLD(t) := { x in {0,1}^n : | wt(x) - n/2 | ≤ t }. (A small brute-force sketch of k*(T) appears after this list.)
  • November 13, 2018 Ori Parzanchevski (Hebrew University of Jerusalem) Powering of High Dimensional Expanders Powering the adjacency matrix of an expander graph results in a better expander of higher degree. High dimensional expanders are simplicial complexes which generalize the notion of expanders. In these settings, we look for an analogue of the powering operation. We show that the naive approach to powering does not yield high dimensional expanders in general, but that for a specific family of HD expanders, there is a powering operation coming from so-called “geodesic walks”, which does turn HD expanders into ones of higher degree. As an application, we use our powering operation to obtain new efficient two-layer samplers. Based on joint work with Tali Kaufman. (A numerical illustration of graph powering appears after this list.)
  • November 6, 2018 Elazar Goldenberg (The Academic College of Tel Aviv-Yaffo) Approximating Edit Distance Within Constant Factor in Truly Sub-Quadratic Time Edit distance is a measure of similarity of two strings based on the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. The edit distance can be computed exactly using a dynamic programming algorithm that runs in quadratic time (the baseline DP is sketched after this list). Andoni, Krauthgamer and Onak (2010) gave a nearly linear time algorithm that approximates edit distance within approximation factor poly(log n). In this talk, I'll present a recent result in which we provide an algorithm whose running time is O(n^{2-2/7}) and which approximates the edit distance within a constant factor. Joint work with Diptarka Chakraborty, Debarati Das, Michal Koucký and Michael Saks.
  • October 23, 2018 Karthik C.S. (Weizmann) On Complexity of Closest Pair Problem Given a set of points in a metric space, the Closest Pair problem asks to find a pair of distinct points in the set with the smallest distance. In this talk, we address the fine-grained complexity of this problem, which has been of recent interest. At the heart of all our proofs is the construction of a dense bipartite graph with special embedding properties, inspired by the construction of locally dense codes. The talk is based on a joint work with Pasin Manurangsi. (A naive closest-pair sketch appears after this list.)
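
A few small illustrative sketches related to the talks above follow; they are additions for intuition, not material from the speakers' work. First, the quantity k*(T) from the November 20 abstract can be computed by brute force for tiny n, by checking via a linear-programming feasibility problem whether some k-wise uniform distribution (interpreting "k-wise independent" as k-wise uniform) is supported on T. The use of scipy and the choice of the test MOD(3,0) are assumptions made here for illustration.

```python
# Illustrative sketch: compute k*(T) for small n via LP feasibility.
# A distribution p on {0,1}^n is k-wise uniform iff E_p[(-1)^{sum_{i in S} x_i}] = 0
# for every non-empty S with |S| <= k; here we restrict the support of p to T.
from itertools import combinations, product

import numpy as np
from scipy.optimize import linprog  # assumed available

def k_star(T, n):
    xs = list(T)
    best = 0
    for k in range(1, n + 1):
        rows, rhs = [], []
        for size in range(1, k + 1):
            for S in combinations(range(n), size):
                rows.append([(-1) ** sum(x[i] for i in S) for x in xs])
                rhs.append(0.0)
        rows.append([1.0] * len(xs))  # probabilities sum to 1
        rhs.append(1.0)
        res = linprog(c=np.zeros(len(xs)), A_eq=np.array(rows), b_eq=np.array(rhs),
                      bounds=[(0, None)] * len(xs), method="highs")
        if res.status != 0:  # infeasible: no k-wise uniform distribution lives on T
            break
        best = k
    return best

n = 6
MOD_3_0 = [x for x in product([0, 1], repeat=n) if sum(x) % 3 == 0]
print("k*(MOD(3,0)) for n = 6:", k_star(MOD_3_0, n))
```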
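
Next, a quick numerical check of the opening claim of the November 13 abstract: powering the adjacency matrix of a (connected, non-bipartite) regular expander shrinks its normalized second eigenvalue at the cost of raising the degree. The random 3-regular graph and the networkx/numpy tooling are choices made here for illustration.

```python
# Illustrative sketch: the k-th power of a d-regular adjacency matrix describes a
# d^k-regular (multi)graph whose eigenvalues are the k-th powers of the original ones,
# so the normalized second eigenvalue drops from lam to lam**k.
import numpy as np
import networkx as nx  # assumed available

d, n, k = 3, 200, 2
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)

def second_eigenvalue(M, degree):
    # Second-largest eigenvalue, in absolute value, of the normalized matrix M/degree.
    evals = np.sort(np.abs(np.linalg.eigvalsh(M / degree)))
    return evals[-2]

lam = second_eigenvalue(A, d)
lam_k = second_eigenvalue(np.linalg.matrix_power(A, k), d ** k)
print(f"normalized second eigenvalue: {lam:.3f} (degree {d}) -> {lam_k:.3f} (degree {d ** k})")
```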
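
For the November 6 talk, the quadratic-time exact algorithm mentioned in the abstract is the textbook dynamic program below; this is only the baseline, not the sub-quadratic approximation algorithm from the talk.

```python
# Textbook O(|a|*|b|) dynamic program for exact edit distance
# (character insertions, deletions, and substitutions), using a rolling row.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # distances from the empty prefix of a
    for i in range(1, m + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # delete a[i-1]
                        dp[j - 1] + 1,                       # insert b[j-1]
                        prev_diag + (a[i - 1] != b[j - 1]))  # substitute or match
            prev_diag = cur
    return dp[n]

assert edit_distance("kitten", "sitting") == 3
```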
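
Finally, the Closest Pair problem from the October 23 abstract in its naive O(n^2) form, i.e. the baseline that the fine-grained question asks whether one can substantially beat; the Hamming metric over {0,1}^d is one illustrative choice of metric space.

```python
# Naive O(n^2 * d) Closest Pair: return the pair of distinct points at minimum distance.
from itertools import combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def closest_pair(points):
    return min(combinations(points, 2), key=lambda pq: hamming(*pq))

pts = [(0, 0, 1, 1), (1, 1, 1, 1), (0, 0, 0, 1), (1, 0, 1, 0)]
p, q = closest_pair(pts)
print(p, q, "distance", hamming(p, q))
```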