
Algorithms Seminar 

School of Computer Science
Tel Aviv University




We meet every Monday at 11:10 in Schreiber 309. Talks usually last 60 minutes. For more information, or to join the mailing list, please send an email to Yossi Azar or Yishay Mansour or Uri Zwick.
To see the abstract of a talk, click on [+].

Schedule for 2010/2011

Date Speaker Affiliation Title
2011/02/21  Uri Zwick  Tel Aviv University  [+] Subexponential lower bounds for randomized pivoting rules for the simplex algorithm
The simplex algorithm is among the most widely used algorithms for solving linear programs in practice. Most deterministic pivoting rules are known, however, to require an exponential number of steps to solve some linear programs. No non-polynomial lower bounds were known, prior to this work, for randomized pivoting rules. We provide the first subexponential (i.e., of the form $2^{\Omega(n^\alpha)}$, for some $\alpha>0$) lower bounds for the two most natural, and most studied, randomized pivoting rules suggested to date.

The first randomized pivoting rule we consider is Random-Edge, which, among all improving pivoting steps (or edges) available from the current basic feasible solution (or vertex), chooses one uniformly at random. The second randomized pivoting rule we consider is Random-Facet, a more complicated randomized pivoting rule suggested by Kalai (1992) and Matou{\v{s}}ek, Sharir and Welzl (1996).
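As a toy illustration of the Random-Edge rule (our own sketch; the vertex representation and the callbacks improving_edges_of and move_along are hypothetical stand-ins for a surrounding simplex implementation, not anything from the talk):

import random

def simplex_random_edge(vertex, improving_edges_of, move_along):
    # Random-Edge: among all improving pivoting steps (edges) leaving the
    # current basic feasible solution (vertex), choose one uniformly at
    # random, and repeat until no improving edge remains.
    while True:
        edges = improving_edges_of(vertex)
        if not edges:
            return vertex          # no improving edge: vertex is optimal
        vertex = move_along(vertex, random.choice(edges))

# Toy usage: the "polytope" is just a graph whose vertices carry objective values.
objective = {"a": 1, "b": 2, "c": 5}
neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
improving = lambda v: [u for u in neighbors[v] if objective[u] > objective[v]]
print(simplex_random_edge("a", improving, lambda v, u: u))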
Our lower bound for the Random-Facet pivoting rule essentially matches the subexponential upper bounds of Kalai and Matou{\v{s}}ek et al. Lower bounds for Random-Edge and Random-Facet were known before only in abstract settings, and not for concrete linear programs.

Our lower bounds are obtained by utilizing connections between pivoting steps performed by simplex-based algorithms and improving switches performed by policy iteration algorithms for 1-player and 2-player games. We start by building 2-player parity games (PGs) on which suitable randomized policy iteration algorithms perform a subexponential number of iterations. We then transform these 2-player games into 1-player Markov Decision Processes (MDPs) which correspond almost immediately to concrete linear programs.

Joint work with Thomas Dueholm Hansen and Oliver Friedmann
2011/02/28  Eran Tromer  Tel Aviv University  [+] Monitoring Network Quality using Streaming Algorithms
Network traffic across the Internet may be lost or corrupted due to overload, errors, or malicious behavior. Effective monitoring of the fault frequency along given paths is needed for driving routing decisions and for detecting violations of Service Level Agreements. However, existing measurement tools, like ping, traceroute, and trajectory sampling, are vulnerable to attacks that can make a network path look better than it really is.

We present new path-quality monitoring protocols that reliably raise an alarm when the packet-loss rate and delay exceed a threshold, even when an adversary tries to bias monitoring results by selectively delaying, dropping, modifying, injecting, or preferentially treating packets. To this end, we employ streaming algorithms for norm estimation. Using these algorithms, the sender and receiver construct compressed "sketches" of network traffic, and then compare the sketches to deduce the approximate path quality. We show how the efficiency of modern streaming algorithms facilitates a very practical implementation on network routers. We also prove improved bounds on the performance of known algorithms in our setting.
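As a rough illustration of the sketching idea (our own toy example under simplifying assumptions, not the protocol from the talk): the sender and receiver each maintain a small linear sketch of the packet identifiers they see, here a basic second-moment (AMS-style) sketch, and compare sketches to estimate how much the two traffic streams differ.

import hashlib

def sign(rep, packet_id):
    # Pseudorandom +1/-1 sign per (repetition, packet) pair via hashing.
    digest = hashlib.sha256(f"{rep}:{packet_id}".encode()).digest()
    return 1 if digest[0] % 2 == 0 else -1

def make_sketch(packet_ids, rows=32):
    # Each party sums signed counts of the packets it observes.
    return [sum(sign(r, pid) for pid in packet_ids) for r in range(rows)]

def estimate_discrepancy(sketch_sender, sketch_receiver):
    # By linearity, the coordinate-wise difference is a sketch of the
    # difference of the two traffic vectors; the mean squared coordinate
    # estimates its squared L2 norm, i.e. roughly how many packets were
    # dropped, injected, or modified on the path.
    diffs = [(a - b) ** 2 for a, b in zip(sketch_sender, sketch_receiver)]
    return sum(diffs) / len(diffs)

# Example: two packets are lost and one bogus packet is injected.
sent = ["p1", "p2", "p3", "p4", "p5"]
received = ["p1", "p3", "p5", "bogus"]
print(estimate_discrepancy(make_sketch(sent), make_sketch(received)))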

Joint work with Boaz Barak, Sharon Goldberg, Jennifer Rexford and David Xiao.
2011/03/07  Andrea Campagna  IT University of Copenhagen  [+] On sampling implicitly defined multisets
The increase in data sizes raises new problems, from an algorithmic perspective, when manipulating such huge inputs.
An algorithm that must carry out a computation on a huge data set can hardly afford resources larger than quasi-linear.
Any polynomial blow-up implies an explosion in the amount of data to manage, which is usually unbearable in both time and space.
Notably, this explosion often concerns artefacts of the computation: temporary data that are needed only to compute the output. These transient data may be candidates for the output, or merely intermediate objects the algorithm needs along the way.
In this talk we discuss how to produce samples from these artefacts without actually generating them; the ephemeral data remain only implicitly defined and are never completely produced.
To carry out this task, we develop a handful of sampling techniques that exploit a measure, usually tied to the characteristics of the output.
As an example, consider a union of Cartesian products from which we wish to retrieve only a relation, whose members are defined according to a measure on the multiset output by the Cartesian products. Producing the whole multiset may be far too expensive in both time and space. However, by exploiting the characteristics and structure of the measure itself, one can obtain a sampling technique that samples only a small sub-multiset of elements that are likely to be interesting. In this way, the whole multiset remains only implicitly defined, and we avoid spending time and space on elements that are not important for computing the output.
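A minimal sketch of this idea (our own illustration, with uniform sampling rather than the output-dependent measures discussed in the talk): sample elements of the multiset union of Cartesian products A_i x B_i without materializing it, by first picking a product with probability proportional to its size and then a uniform pair inside it.

import random

def sample_from_union_of_products(products, k):
    # products: list of pairs (A_i, B_i); the implicit multiset is the
    # disjoint union of the Cartesian products A_i x B_i.
    sizes = [len(a) * len(b) for a, b in products]
    samples = []
    for _ in range(k):
        i = random.choices(range(len(products)), weights=sizes)[0]
        a, b = products[i]
        samples.append((random.choice(list(a)), random.choice(list(b))))
    return samples

# Example: the full multiset (of size 7) is never built.
print(sample_from_union_of_products([({1, 2}, {3, 4}), ({5}, {6, 7, 8})], k=5))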
Using this approach, we achieve good running times and small space usage for a range of distinct problems.
We will address problems connected with data mining, with a spotlight on graph mining and on size estimation for sparse Boolean matrix multiplication.
2011/03/14  Michael Dinitz  Weizmann Institute  [+] Directed Spanners via Flow-Based Linear Programs
A k-spanner of a given graph is a subgraph that preserves all distances within a factor of k. This notion is useful in several contexts, from distributed computing to property testing. By examining spanners through flow-based linear programming relaxations, we design an O(n^{2/3})-approximation algorithm for the directed k-spanner problem that works for all k. This is the first sublinear approximation for arbitrary edge lengths. We also design a different rounding scheme with a better approximation ratio for the special case of k=3 and unit edge lengths. Our algorithms easily extend to the fault-tolerant setting, which has recently attracted attention but not from an approximation viewpoint. A virtue of all of our algorithms is that they are relatively simple.
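For concreteness (our own illustration of the definition, not part of the talk), a k-spanner can be verified directly: every distance in the subgraph must be within a factor k of the corresponding distance in the original graph.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping u -> dict of v -> nonnegative edge length.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, length in graph.get(u, {}).items():
            if d + length < dist.get(v, float("inf")):
                dist[v] = d + length
                heapq.heappush(heap, (d + length, v))
    return dist

def is_k_spanner(graph, subgraph, k):
    # The subgraph is a k-spanner if it stretches no distance by more than k.
    for u in graph:
        original, spanner = dijkstra(graph, u), dijkstra(subgraph, u)
        for v, d in original.items():
            if spanner.get(v, float("inf")) > k * d:
                return False
    return True

graph = {"a": {"b": 1, "c": 5}, "b": {"c": 1}, "c": {}}
subgraph = {"a": {"b": 1}, "b": {"c": 1}, "c": {}}
print(is_k_spanner(graph, subgraph, k=2))   # dropping the edge a->c keeps all distances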

Time permitting, I will describe followup work that further develops the direction of fault-tolerant spanners.

Joint work with Robert Krauthgamer.
2011/03/21  Liam Roditty  Bar Ilan University  [+] The Minimum Weight Cycle Problem
We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a graph with nonnegative edge weights.
Extending the seminal work of Itai and Rodeh [SIAM J. Computing 1978 and STOC'77] on minimum cycles in unweighted graphs, we show that the minimum weight cycle problem in a (directed or undirected) $n$-node graph with weights in $\{1,\ldots, M\}$ can be efficiently reduced to finding a minimum weight {\em triangle} in a $\Theta(n)$-node graph with weights in $\{1,\ldots,O(M)\}$.
We also show that, given a weighted undirected graph $G(V,E,w)$ with a minimum cycle of weight $t^*$ whose maximal edge weight is $w^*$, it is possible to efficiently compute the following approximations:
1. For integral weights from the range $[1,M]$, we report a cycle of weight at most $\frac{4}{3}t^*$ in $O(n^2\log n(\log n + \log M))$ time.
2. For integral weights from the range $[1,M]$, we report a cycle of weight at most $t^* + w^*$ in $O(n^2\log n(\log n + \log M))$ time.
3. For non-negative real edge weights and any $\epsilon>0$, we report a cycle of weight at most $(\frac{4}{3}+\epsilon)t^*$ in $O(\frac{1}{\epsilon}n^2\log n(\log \log n))$ time.
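To make the target of the reduction above concrete, here is a naive cubic-time minimum weight triangle search (our own illustration; the point of the talk is that fast algorithms for this subproblem yield fast minimum weight cycle algorithms).

def min_weight_triangle(w):
    # w: symmetric n x n matrix of edge weights, with None marking non-edges.
    n = len(w)
    best = None
    for i in range(n):
        for j in range(i + 1, n):
            if w[i][j] is None:
                continue
            for k in range(j + 1, n):
                if w[i][k] is None or w[j][k] is None:
                    continue
                weight = w[i][j] + w[i][k] + w[j][k]
                if best is None or weight < best[0]:
                    best = (weight, (i, j, k))
    return best   # (total weight, triangle) or None if the graph is triangle-free

w = [[None, 2, 4, 1],
     [2, None, 3, None],
     [4, 3, None, 5],
     [1, None, 5, None]]
print(min_weight_triangle(w))   # (9, (0, 1, 2))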

The talk is based on joint works with Roei Tov and with Virginia Vassilevska Williams.
2011/03/28  Lee-Ad Gottlieb  Hebrew University  [+] Efficient Classification for Metric Data
Recent advances in large-margin classification of data residing in general metric spaces (rather than Hilbert spaces) enable classification under various natural metrics, such as edit and earthmover distance. The general framework developed for this purpose by von Luxburg and Bousquet [JMLR, 2004] left open the question of computational efficiency and of providing direct bounds on classification error. We design a new algorithm for classification in general metric spaces, whose runtime and accuracy depend on the doubling dimension of the data points. It thus achieves superior classification performance in many common scenarios. The algorithmic core of our approach is an approximate (rather than exact) solution to the classical problems of Lipschitz extension and of Nearest Neighbor Search. The algorithm's generalization performance is established via the fat-shattering dimension of Lipschitz classifiers.

This is joint work with Aryeh Kontorovich and Robert Krauthgamer.
2011/04/04  Gilad Tsur  Tel Aviv University  [+] Testing Properties of Sparse Images
We initiate the study of testing properties of images that correspond to sparse 0/1-valued matrices of size n x n. Our study is related to but different from the study initiated by Raskhodnikova (Proceedings of RANDOM, 2003), where the images correspond to dense 0/1-valued matrices. Specifically, while the distance between images in the model studied by Raskhodnikova is the fraction of entries on which the images differ, taken with respect to all n^2 entries, the distance measure in our model is defined as the fraction of such entries taken with respect to the actual number of 1's in the matrix. We study several natural properties: connectivity, convexity, monotonicity, and being a line. In all cases we give testing algorithms with sublinear complexity, and in some of the cases we also provide corresponding lower bounds.

Joint work with Dana Ron.
2011/04/11  Ran Roth  Tel Aviv University  [+] Submodular Max-SAT
We introduce the submodular Max-SAT problem. This problem is a natural generalization of the classical Max-SAT problem in which the additive objective function is replaced by a submodular one.
We develop a randomized linear-time $2/3$-approximation algorithm for the problem. Our algorithm is applicable even for the online variant of the problem. We also establish hardness results for both the online and offline settings. Notably, for the online setting, the hardness result proves that our algorithm is best possible, while for the offline setting, the hardness result establishes a computational separation between the classical Max-SAT and the submodular Max-SAT.

Joint work with Yossi Azar and Iftah Gamzu
2011/05/02  Yaron Singer  UC Berkeley  [+] How to Win Friends and Influence People, Truthfully
Throughout the past decade there has been extensive research on algorithmic and data mining techniques for solving the problem of influence maximization in social networks: if one can convince a subset of individuals to influence their friends to adopt a new product or technology, which subset should be selected so that the spread of influence in the social network is maximized?
Despite the progress in modeling and techniques, the incomplete-information aspect of the problem has been largely overlooked. While the network structure is often available, the inherent costs individuals have for influencing their friends are difficult to extract.

In this talk we will discuss mechanisms that extract individuals' costs in well-studied models of social network influence. We follow the mechanism design framework, which advocates truthful mechanisms whose allocation and payment schemes incentivize individuals to report their true information. Beyond their provable theoretical guarantees, the mechanisms work well in practice. To show this, we will present results from experiments performed on the Mechanical Turk platform, together with social network data, that provide experimental evidence of the mechanisms' effectiveness.
2011/05/16  Kshipra Uday Bhawalkar  Stanford  [+] Welfare Guarantees for Combinatorial Auctions with Item Bidding
We analyze the price of anarchy (POA) in a simple and practical non-truthful combinatorial auction when players have subadditive valuations for goods. We study the mechanism that sells every good in parallel with separate second-price auctions. We first prove that under a standard “no overbidding” assumption, for every subadditive valuation profile, every pure Nash equilibrium has welfare at least 50% of optimal — i.e., the POA is at most 2. For the incomplete information setting, we prove that the POA with respect to Bayes-Nash equilibria is strictly larger than 2 — an unusual separation from the full-information model — and is at most 2 ln m, where m is the number of goods.

Joint work with Tim Roughgarden.
2011/05/30  Ilan Cohen  Tel Aviv University  [+] Prompt Mechanisms for Bounded Capacity Auctions
We study the problem of designing truthful mechanisms for periodic bounded capacity auctions. A bounded capacity auction is a single-item auction in which the number of participating bidders is bounded by a fixed constant. We observe that a simple greedy algorithm can underlie an optimal truthful mechanism for this problem. However, a buyer that wins an item may not know her payment when she wins.
In fact, there are cases in which a winning buyer may have to wait indefinitely to learn her payment. Hence, we focus on developing prompt mechanisms in which a winning buyer learns her payment when she wins the auction.

We develop a randomized prompt truthful mechanism that achieves a competitive ratio of $1.701 + \epsilon$. An important ingredient in analyzing the mechanism is a study of a natural stochastic process which may be of independent interest. In this process, we have a fixed number of bins and a stream of balls that arrive in an online fashion. Once a ball arrives, it is stored in one of the unoccupied bins; if all the bins are occupied then the ball is thrown away. In each time step, we select a bin uniformly at random, clear it, and gain its content. We analyze the gain of this process with respect to a process that clears a ball (if any bin is occupied) at each time step, and prove that its gain is worse by a factor of at most $1.701 + \epsilon$.
One crucial part of our analysis is to bound the gain difference between this randomized process and a carefully designed fractional process. This is achieved by applying Azuma's inequality to a super-martingale defined with respect to the two previously mentioned processes.
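A quick Monte Carlo sketch of this process (our own illustration; the arrival pattern, unit ball values, and parameters are arbitrary) compares the gain of the random-bin-clearing process with that of a reference process that clears an occupied bin whenever one exists.

import random

def simulate(num_bins, arrivals, trials=2000):
    # arrivals[t] = number of (unit-value) balls arriving at time step t.
    random_gain, reference_gain = 0, 0
    for _ in range(trials):
        occupied_rand, occupied_ref = 0, 0
        for a in arrivals:
            # Arriving balls fill unoccupied bins; the rest are thrown away.
            occupied_rand = min(num_bins, occupied_rand + a)
            occupied_ref = min(num_bins, occupied_ref + a)
            # Random process: clear a uniformly random bin; it gains only
            # if the chosen bin happens to be occupied.
            if random.randrange(num_bins) < occupied_rand:
                occupied_rand -= 1
                random_gain += 1
            # Reference process: clear an occupied bin whenever one exists.
            if occupied_ref > 0:
                occupied_ref -= 1
                reference_gain += 1
    return reference_gain / random_gain

print(simulate(num_bins=5, arrivals=[2, 0, 1, 3, 0, 0, 1, 0, 4, 0]))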

Joint work with Yossi Azar and Iftah Gamzu
2011/06/06  Michal Rosen  Tel Aviv University  [+] A Near-Optimal Sublinear-Time Algorithm for Approximating the Minimum Vertex Cover Size
We give a nearly optimal sublinear-time algorithm for approximating the size of a minimum vertex cover in a graph G. The algorithm may query the degree deg(v) of any vertex v of its choice, and for each 1 <= i <= deg(v), it may ask for the i-th neighbor of v. Letting VC_opt(G) denote the minimum size of a vertex cover in G, the algorithm outputs, with high constant success probability, an estimate VC_a(G) such that VC_opt(G) <= VC_a(G) <= 2VC_opt(G) + eps*n, where eps is a given additive approximation parameter. We refer to such an estimate as a (2,eps)-estimate.
The query complexity and running time of the algorithm are \tilde{O}(D poly(1/eps)), where D denotes the average vertex degree in the graph. The best previously known sublinear algorithm, of Yoshida et al. (STOC 2009), has query complexity and running time O(d^4/eps^2), where d is the maximum degree in the graph.
Given the lower bound of \Omega(D) (for constant eps) for obtaining such an estimate (with any constant multiplicative factor) due to Parnas and Ron (TCS 2007), our result is nearly optimal.
In the case that the graph is dense, that is, the number of edges is Theta(n^2), we consider another model, in which the algorithm may ask, for any pair of vertices u and v, whether there is an edge between u and v. We show how to adapt the algorithm that uses neighbor queries to this model and obtain an algorithm that outputs a (2,eps)-estimate of the size of a minimum vertex cover whose query complexity and running time are \tilde{O}(n) poly(1/eps).
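The factor 2 in a (2,eps)-estimate stems from the classical fact that the endpoint set of any maximal matching is a vertex cover of size between VC_opt and 2*VC_opt; sublinear algorithms in this line of work estimate the size of such a matching via local simulation using only degree and neighbor queries. The non-sublinear version of this fact is easy to state in code (our own illustration, not the algorithm from the talk):

def greedy_maximal_matching(edges):
    # Scan edges in a fixed order; keep an edge if both endpoints are free.
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def vertex_cover_two_approx(edges):
    # Endpoints of a maximal matching cover every edge, and any vertex cover
    # must contain at least one endpoint of each matched edge, so the size
    # of this set lies between VC_opt and 2*VC_opt.
    return {v for e in greedy_maximal_matching(edges) for v in e}

print(len(vertex_cover_two_approx([(0, 1), (1, 2), (2, 3), (3, 0), (2, 4)])))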

Joint work with Krzysztof Onak, Dana Ron and Ronitt Rubinfeld.
2011/06/13  Ety Haitsin  Tel Aviv University  [+] Prompt Mechanism for Ads Placement Over Time
Consider the placement of video ads in commercial breaks on a television channel. Ads arrive online over time and each has an expiration date. Commercial breaks are typically of the same duration; the video ads, however, may have arbitrary sizes. Each ad has a private value and should be posted at most once, in some break, by its expiration date. A player gets her value if her ad is broadcast by its expiration day (and, obviously, after its arrival day), and otherwise has zero value. Arranging the ads into the commercial breaks while maximizing the players' profit is a classical ad-placement problem subject to capacity constraints, which should be solved truthfully. However, we are interested not only in truthfulness but also in a prompt mechanism, in which an agent's payment is determined at the moment her ad is broadcast. Promptness is a crucial requirement for our algorithm, as it allows the payment process to be conducted without creating any lingering relation between the auctioneer and the players; without it, such mechanisms may be hard to apply in real life.
We design a $6$-approximation prompt mechanism for the problem. Previously, Cole et al. considered the special case in which all ads have the same size, equal to the break size, and achieved a $2$-approximation prompt mechanism. The general case of ads with arbitrary sizes is considerably more involved and requires designing a new algorithm, which we call the Gravity Algorithm.

Joint work with Yossi Azar


Previous semesters



Date Speaker Affiliation Title
2010/10/18  Alex Lopez-Ortiz  University of Waterloo  [+] Parameterized Analysis of On-line Algorithms
The competitive ratio has served as a practical measure for the study and classification of online algorithms. Notwithstanding the wide applicability of competitive analysis, it has been observed by numerous researchers that in certain settings the competitive ratio produces results that are too pessimistic or do not fully reflect practical experience.
In this talk we consider a natural generalization of parameterized analysis to online algorithms and observe that it leads to proper separations between various known algorithms for paging and list update.
2010/10/25  Andrew Goldberg  Microsoft Research, Silicon Valley  [+] PHAST: Hardware-Accelerated Shortest Path Trees
We present a novel algorithm to solve the nonnegative single-source shortest path problem on road networks. After a quick preprocessing phase, we can compute all distances from a given source in the graph with essentially a linear sweep over all vertices. Because this sweep is independent of the source, we are able to reorder vertices in advance to exploit locality. Moreover, our algorithm takes advantage of features of modern CPU architectures, such as SSE instructions and multiple cores.
Compared to Dijkstra's algorithm, our method performs fewer operations, has better locality, and is better able to exploit parallelism at the multi-core and instruction levels. We gain additional speedup when implementing our algorithm on a GPU, where our algorithm is up to three orders of magnitude faster than Dijkstra's algorithm on a high-end CPU. This makes applications based on all-pairs shortest paths practical for continental-sized road networks. The applications include, for example, computing graph diameter, exact arc flags, and centrality measures such as exact reaches or betweenness.
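The heart of the method is that, once preprocessing fixes a source-independent vertex order, distances from any source can be computed by a single pass that relaxes incoming edges in that order. The sketch below (our own illustration) shows such a one-pass sweep on a DAG in topological order; in PHAST it is the preprocessing (a contraction-hierarchies-style augmentation of the road network) that makes an analogous sweep valid.

def sweep_distances(order, in_edges, source):
    # order: a source-independent vertex order (here, a topological order).
    # in_edges[v]: list of (u, length) pairs for edges entering v.
    dist = {v: float("inf") for v in order}
    dist[source] = 0
    for v in order:
        # One linear sweep: settle v by relaxing its incoming edges, whose
        # tails appear earlier in the order.
        for u, length in in_edges.get(v, []):
            dist[v] = min(dist[v], dist[u] + length)
    return dist

order = ["s", "a", "b", "t"]
in_edges = {"a": [("s", 2)], "b": [("s", 5), ("a", 1)], "t": [("b", 3)]}
print(sweep_distances(order, in_edges, "s"))   # {'s': 0, 'a': 2, 'b': 3, 't': 6}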


Joint work with Daniel Delling, Andreas Nowatzyk, and Renato Werneck
2010/11/01  Alex Fabrikant  Princeton University  [+] Internet routing convergence: understanding BGP dynamics
The Border Gateway Protocol (BGP) is the de facto standard for Internet inter-domain routing -- it's the "glue" that holds the modern Internet together by allowing ISPs to dynamically adapt to changes happening throughout the network. Any aberrant behavior in BGP can trigger significant global-scale disruptions, as evidenced by many recent incidents. In this talk, I will address the worst-case convergence of BGP dynamics, vital for understanding future Internet scaling: can we always predict whether the network will converge to a stable routing, and how long might it take to converge?

Underneath BGP's implementation details lie dynamic interactions primarily driven by ISPs' rankings of possible routes, with the ISPs intentionally given significant flexibility in designing the rankings. I will show a fundamental problem with this flexibility: with it, inter-domain routing becomes a computationally universal dynamic system. Even with a full view of the network, checking whether convergence is guaranteed is PSPACE-complete. This provides a devastatingly negative answer to the 10-year-old open problem of checking BGP safety.


I will then impose some realistic constraints that can ensure convergence, and address the natural question of how fast BGP converges. Convergence is intrinsically linked to timers in the protocol, and we consider the risks of the ongoing proposals to "improve" the default BGP timers. With the necessarily incremental deployment of such changes, we show that, counter-intuitively, BGP convergence behavior can exponentially worsen in various senses.


Lastly, I will briefly show a more realistic BGP model that addresses previously unmodeled internal details, and, despite the extra realism, renders convergence checking much more feasible, especially under a novel light restriction on ranking functions, which leaves enough flexibility to be a plausible ISP policy recommendation. Furthermore, checking convergence in the new model is not only tractable, but even solvable with a distributed algorithm, with robust behavior under incomplete information about the ISPs' private policies.


The talk is based on joint work with Christos Papadimitriou, Umar Syed, Martin Suchara, and Jennifer Rexford.
2010/11/08  Asaph Arnon  Tel Aviv University  [+] Repeated Budgeted Second Price Ad Auction
We study a setting where agents bid repeatedly for multiple identical items (such as impressions in an ad auction). The agents are limited by a budget and have a value for each item. They submit to the auction a bid and a budget. Then, the items are sold by a sequential second price auction, where in each auction the highest bid wins and pays the second price. The main influence of the budget is that once an agent exhausts its budget it does not participate in future auctions. We model this sequential auction as a one-shot budget auction.
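A minimal sketch of the sequential auction being modeled (our own illustration; the bids, budgets, and tie-breaking are arbitrary toy choices): in every round the highest remaining bid wins and pays the second-highest remaining bid, and an agent stops participating once its budget is exhausted.

def sequential_second_price(bids, budgets, num_items):
    # bids[i], budgets[i]: the bid and declared budget submitted by agent i.
    spent = [0.0] * len(bids)
    allocation = []   # one (item, winner, price) triple per sold item
    for item in range(num_items):
        active = [i for i in range(len(bids)) if spent[i] < budgets[i]]
        if not active:
            break
        active.sort(key=lambda i: bids[i], reverse=True)
        winner = active[0]
        price = bids[active[1]] if len(active) > 1 else 0.0
        spent[winner] += price   # toy model: the final payment may slightly overshoot the budget
        allocation.append((item, winner, price))
    return allocation

# Two agents competing for 5 identical items.
print(sequential_second_price(bids=[10, 7], budgets=[15, 30], num_items=5))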


Our main contribution is to study the resulting equilibria in such a budget auction. We show that submitting the true budget is a dominant strategy, while bidding the true value is not. The existence of pure Nash equilibria depends on our assumptions about the bidding strategy of losing agents (agents not receiving any items). We show that if losing agents are restricted to bid their value then there are cases with no pure Nash equilibrium; however, if losing agents can bid anywhere between the minimum price and their value then a pure Nash equilibrium always exists.


We also study simple dynamics of repeated budget auctions, showing their convergence to a Nash equilibrium for the case of two agents and for the case of multiple agents with identical budgets.
2010/11/15  Niv Buchbinder  The Open University  [+] Secretary Problems via Linear Programming
In the classical secretary problem an employer would like to choose the best candidate among $n$ competing candidates that arrive in a random order.

This basic concept of $n$ elements arriving in a random order, with irrevocable decisions made by an algorithm, has been explored extensively over the years and used to model the behavior of many processes.
Our main contribution is a new linear programming technique that we introduce as a tool for obtaining and analyzing mechanisms for the secretary problem and its variants. Capturing the set of mechanisms as a linear polytope yields the following immediate advantages.


1. Computing the optimal mechanism reduces to solving a linear program.
2. Proving an upper bound on the performance of any mechanism reduces to finding a feasible solution to the dual program.
3. Exploring variants of the problem is as simple as adding new constraints, or manipulating the objective function of the linear program.


We demonstrate the applicability of these ideas in several settings including online auctions.
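As a rough illustration of the flavor of this technique (a sketch of our own, based on the standard LP formulation of the classical secretary problem, not necessarily the exact formulation from the talk): let x_i be the probability that the mechanism selects the candidate at position i. A feasible mechanism must satisfy i*x_i <= 1 - sum_{j<i} x_j, and the probability of hiring the best candidate is sum_i (i/n)*x_i, so the optimal mechanism can be found with an off-the-shelf LP solver.

import numpy as np
from scipy.optimize import linprog

def secretary_lp(n):
    # Variables: x_i = Pr[the mechanism selects the candidate at position i].
    # Maximize sum_i (i/n) * x_i subject to i*x_i + sum_{j<i} x_j <= 1.
    c = -np.array([(i + 1) / n for i in range(n)])   # linprog minimizes
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i] = 1.0
        A[i, i] = i + 1
    res = linprog(c, A_ub=A, b_ub=np.ones(n), bounds=[(0, 1)] * n, method="highs")
    return -res.fun   # optimal probability of hiring the best candidate

print(secretary_lp(20))   # tends to 1/e ~ 0.368 as n grows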


This is a joint work with Kamal Jain and Mohit Singh.
2010/11/22  Dmitry Sotnikov  Tel Aviv University  [+] All-Pairs Shortest Paths in O(n^2) Expected Time
We present an all-pairs shortest paths algorithm whose expected running time on a complete directed graph on n vertices, whose edge weights are chosen independently and uniformly at random from [0, 1], is O(n^2). This resolves a long-standing open problem. The algorithm is a variant of the dynamic all-pairs shortest paths algorithm of Demetrescu and Italiano. The analysis relies on a proof that the expected number of locally shortest paths in such randomly weighted graphs is O(n^2). We also present a dynamic version of the algorithm that recomputes all shortest paths after a random edge update in O(log^2 n) expected time.


Joint work with Yuval Peres, Benny Sudakov and Uri Zwick
2010/11/29  Piotr Sankowski  University of Warsaw  [+] Approximation Algorithms for Union and Intersection Covering Problems
In a classical covering problem, we are given a set of requests that we need to satisfy (fully or partially), by buying a subset of items at minimum cost. For example, in the k-MST problem we want to find the cheapest tree spanning at least k nodes of an edge-weighted graph. Here nodes and edges represent requests and items, respectively.

In this paper, we initiate the study of a new family of multi-layer covering problems. Each such problem consists of a collection of h distinct instances of a standard covering problem (layers), with the constraint that all layers share the same set of requests. We identify two main subfamilies of these problems:
- in a union multi-layer problem, a request is satisfied if it is satisfied in at least one layer;
- in an intersection multi-layer problem, a request is satisfied if it is satisfied in all layers.


To see some natural applications, consider both generalizations of k-MST. Union k-MST can model a problem where we are asked to connect a set of users to at least one of two communication networks, e.g., a wireless and a wired network. On the other hand, intersection k-MST can formalize the problem of connecting a subset of users to both electricity and water.


We present a number of hardness and approximation results for union and intersection versions of several standard optimization problems: MST, Steiner tree, set cover, facility location, TSP, and their partial covering variants.


Joint work with: Marek Cygan, Fabrizio Grandoni, Stefano Leonardi, Marcin Mucha and Marcin Pilipczuk
2010/12/06  Kobbi Nissim  Microsoft Research and Ben-Gurion University  [+] Approximately Optimal Mechanism Design via Differential Privacy
We show a generic construction of mechanisms that allows for approximately optimal implementation in dominant strategies of social welfare functions, without utility transfer, given that the social welfare function has bounded sensitivity to agent types. We demonstrate the applicability of our construction with respect to pricing and facility location. The mechanisms we design are optimal up to an additive factor on the order of the square root of the optimum.


The construction mixes between two mechanisms: with high probability we actuate a random mechanism that reduces each player's influence on the choice of the social alternative, while still choosing the optimal outcome with high probability. With the complementary probability, we actuate a mechanism that is sub-optimal yet provides strong incentives to disclose personal information.
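The first component is in the spirit of differentially private selection. The sketch below (our own illustration, using the standard exponential mechanism rather than the exact construction from the talk) chooses a social alternative with probability exponentially tilted toward high welfare, so that no single agent's report can change the outcome distribution by much.

import math
import random

def exponential_mechanism(alternatives, welfare, epsilon, sensitivity):
    # Pr[alternative r] proportional to exp(epsilon * welfare(r) / (2 * sensitivity)),
    # where sensitivity bounds how much one agent's type can change welfare(r).
    weights = [math.exp(epsilon * welfare(r) / (2 * sensitivity)) for r in alternatives]
    return random.choices(alternatives, weights=weights)[0]

# Toy facility-location flavor: welfare is the negated total distance of the
# agents to the chosen facility position (all numbers are arbitrary).
agents = [0.1, 0.2, 0.8]
positions = [i / 10 for i in range(11)]
welfare = lambda pos: -sum(abs(pos - a) for a in agents)
print(exponential_mechanism(positions, welfare, epsilon=1.0, sensitivity=1.0))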


Joint work with Rann Smorodinsky and Moshe Tennenholtz.
2010/12/13  Eric Price  MIT  [+] Efficient linear sketches for the set query problem
Linear sparse recovery is the problem of approximating a vector x from a low-dimensional linear sketch Ax, where A is an m x n matrix with m << n. Since A has a non-trivial nullspace, one cannot hope to recover all x exactly. Instead we require that, if x is "close" to being "good", the recovered vector is "close" to x. An optimal solution has been known when "close" uses l_1 or l_2 distance and "good" means being k-sparse, but not for other definitions of "close" and "good".


We give a framework for sparse recovery where the problem is split into two parts. In the first part we locate the heavy coefficients of x, and in the second part we estimate their values (termed the _set query problem_). We give an optimal algorithm for the set query problem, leaving only the first part for problem-specific development. We then give improved algorithms for locating the heavy hitters in (1) power law distributions, (2) block-sparse vectors, and (3) range trees. This last allows us to perform sparse recovery when "close" uses the Earth Mover's Distance as a metric.
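To give a feel for the set query problem (a toy illustration of our own using a basic Count-Sketch; the algorithm from the talk achieves stronger, optimal guarantees): given a linear sketch of x and a candidate support set S, estimate the value of every coordinate in S.

import hashlib
import statistics

def _bucket_and_sign(rep, i, buckets):
    digest = hashlib.sha256(f"{rep}:{i}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % buckets, (1 if digest[4] % 2 == 0 else -1)

def count_sketch(x, reps=5, buckets=32):
    # Linear sketch: a reps x buckets table of signed sums of coordinates.
    table = [[0.0] * buckets for _ in range(reps)]
    for i, xi in enumerate(x):
        if xi != 0:
            for r in range(reps):
                b, s = _bucket_and_sign(r, i, buckets)
                table[r][b] += s * xi
    return table

def set_query(table, support):
    # Estimate x_i for each i in the known support S as the median of its
    # signed bucket values across the repetitions.
    estimates = {}
    for i in support:
        values = []
        for r, row in enumerate(table):
            b, s = _bucket_and_sign(r, i, len(row))
            values.append(s * row[b])
        estimates[i] = statistics.median(values)
    return estimates

x = [0.0] * 1000
x[3], x[70], x[500] = 5.0, -2.0, 9.0
print(set_query(count_sketch(x), support=[3, 70, 500]))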


Most of this work will appear at SODA 2011 and is available at http://arxiv.org/abs/1007.1253
2010/12/20  Eli Gafni  UCLA  [+] Oblivious Collaboration
Communication is a crucial ingredient in every kind of collaborative work. But what is the least possible amount of communication required for a given task? We formalize this question by introducing a new framework for distributed computation, called oblivious protocols. Communication in our framework is modeled by an adversarial scheduler.
We investigate the power of this model by considering two concrete examples, the musical chairs task MC(n,m) and the well-known Renaming problem. The MC(n,m) game is played by n players (processors) with m chairs. Players can occupy chairs, and the game terminates as soon as each player occupies a unique chair. Thus we say that player P is in conflict if some other player Q is occupying the same chair, i.e., termination means there are no conflicts. By known results from distributed computing, if m <= 2n - 2, no strategy of the players can guarantee termination. However, there is a protocol with m = 2n - 1 chairs that always terminates. Here we consider an oblivious protocol where in every time step the only communication is this: the scheduler chooses an arbitrary nonempty set of players, and for each of them provides only one bit of information, specifying whether the player is currently in conflict or not. A player notified not to be in conflict halts and never changes its chair, whereas a player notified to be in conflict changes its chair according to its deterministic program. Remarkably, even with this minimal communication, termination can be guaranteed with only m = 2n - 1 chairs. This allows us to give an oblivious protocol for the Renaming problem that is optimal.
Other aspects suggest themselves, such as the efficiency (program length) of our protocol. We make substantial progress here as well, though many interesting questions remain open.
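To illustrate the communication model only (a toy simulator of our own, not the protocol from the talk; here each player's deterministic program is simply a private fixed scan order over the chairs): in every step an adversarial scheduler, modeled below as a random nonempty subset, tells each chosen player whether it is currently in conflict.

import random

def simulate_musical_chairs(n, m, max_steps=10000):
    programs = [random.sample(range(m), m) for _ in range(n)]   # private chair orders
    chair = [programs[p][0] for p in range(n)]
    halted = [False] * n
    for step in range(max_steps):
        conflicts = {p for p in range(n)
                     if any(q != p and chair[q] == chair[p] for q in range(n))}
        if not conflicts:
            return step, chair          # every player occupies a unique chair
        for p in random.sample(range(n), random.randint(1, n)):
            if p in conflicts and not halted[p]:
                # Notified "in conflict": move to the next chair in the private order.
                idx = programs[p].index(chair[p])
                chair[p] = programs[p][(idx + 1) % m]
            elif p not in conflicts:
                halted[p] = True        # notified "not in conflict": halt for good
    return None, chair

print(simulate_musical_chairs(n=4, m=7))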


Joint work with: Yehuda Afek (TAU), Yakov Babichenko (HUJI), Uriel Feige(Weizmann), Nati Linial (HUJI), and Benny Sudakov (UCLA)
2010/12/27  Igor Razgon  University of Leicester  [+] Parameterized Complexity of Graph Separation Problems: Current Results and Further Challenges
Consider an NP-hard optimization problem with input size $n$ which is associated with a parameter $k$ (the parameter may be, for instance, the maximum allowed output size). We say that this problem is fixed-parameter tractable (FPT) if it can be solved by a fixed-parameter algorithm, i.e. an algorithm whose runtime is uniformly polynomial in $n$, though exponential (or even superexponential) in $k$. Examples of such runtimes are $O(2^k \cdot n)$, $O(3^k+n^2)$ and so on. When $k$ is small, the hope is that the respective dependence on $k$ is also small. In this case, an NP-hard optimization problem can be solved in low polynomial time. Thus, if the considered parameter is reasonably small in practical applications, fixed-parameter algorithms can be used as a method of coping with NP-hardness, both precise (unlike approximation algorithms) and efficient (unlike ordinary brute-force algorithms).
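A standard textbook example of a fixed-parameter algorithm (our own illustration, not from the talk) is the branching algorithm for vertex cover: pick any uncovered edge and branch on which of its two endpoints enters the cover, decreasing $k$; the search tree has size $O(2^k)$ with polynomial work per node.

def vertex_cover_at_most_k(edges, k):
    # Decides whether the graph has a vertex cover of size at most k.
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Any vertex cover must contain u or v in order to cover the edge (u, v).
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return vertex_cover_at_most_k(without_u, k - 1) or vertex_cover_at_most_k(without_v, k - 1)

# A 5-cycle needs 3 vertices to cover all of its edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover_at_most_k(cycle, 2), vertex_cover_at_most_k(cycle, 3))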


Graph separation problems comprise a large class of problems where the objective is to remove the smallest set of vertices (or edges) from the given graph so that certain structures in the graph are broken. Examples of such structures: paths between certain pairs of vertices, directed cycles, odd cycles, etc. In real-world applications of these problems it is often reasonable to assume that the separator (i.e. the set of vertices removed) is much smaller than the size of the whole graph.
It is therefore natural to solve these problems by means of fixed-parameter algorithms. However, designing good fixed-parameter algorithms for these problems is a very tricky task. In fact, despite many years of active research, for a number of separation problems it was not even clear whether they are FPT.


In this talk I will overview current results related to the design of fixed-parameter algorithms for separation problems. To make the talk self-contained, I will start from the definition of a fixed-parameter algorithm and provide a simple example of such an algorithm. I will then informally describe a fixed-parameter algorithm for the multiway cut problem, and show how the methodology underlying this algorithm has helped resolve the fixed-parameter tractability of a number of challenging problems that stayed open for many years despite the attempts of many researchers. Finally, I will overview some challenging questions that are still open.
2011/01/03  Udi Wieder  Microsoft Research, Silicon Valley  [+] Lower Bounds on Near Neighbor Search via Metric Expansion
We show that the cell probe complexity of performing nearest neighbor search (NNS) on a metric space is tightly related to the expansion of the metric space: given a metric space, we look at the graph obtained by connecting every pair of points within a certain distance r. We then look at various notions of expansion in this graph, relating them to the cell probe complexity of NNS for randomized and deterministic, exact and approximate algorithms. These relationships can be used to derive most of the known lower bounds in well-known metric spaces such as l_1, l_2 and l_infinity by simply computing their expansion, as well as several new ones. Our work essentially reduces the problem of proving cell probe lower bounds for near neighbor search to that of computing the appropriate expansion parameter.

Joint work with Rina Panigrahy and Kunal Talwar
2011/01/10  Moti Medina  Tel Aviv University  [+] Online Packet-Routing in Grids with Bounded Buffers
We present the first online algorithm with a polylogarithmic competitive ratio for the problem of online routing of packets in unidirectional grids. The goal is to maximize the throughput, i.e., the number of delivered packets. Our online algorithm is deterministic, centralized, handles packets with deadlines, allows bounded buffers, uses adaptive routing, and may drop packets before they reach their destination.

Joint work with Guy Even.
2011/01/17  Uzi Vishkin  University of Maryland Institute for Advanced Computer Studies (UMIACS)  [+] Understanding PRAM as fault line: Too easy or... too difficult?
Power constraints have pushed mainstream computing towards parallel computing. Unfortunately, recent high-profile publications observed that "Chipmakers are busy designing (multi-core) microprocessors that most programmers can't handle" and "Only heroic programmers can exploit the vast parallelism in current machines", and are now seeking instead new "computing stacks" that extend from the programmer's model to programming languages, compilers and architectures. I will claim that the field is running into a geodetic-type phenomenon, a fault line around the basic algorithmic model. Given a platform, is the PRAM view of p synchronous processors communicating in unit time through shared memory too easy for it, or is it too difficult? This fault line has been a central point of contention within the theory of parallel algorithms for at least 3 decades that now extends to the rest of the field. A stack cannot extend across a fault line, which means that chipmakers cannot expect a wealth of algorithms and high programmer's productivity for architectures that force programming for locality.

Till the mid-1990s I was part of a theory community that has developed a rich and elegant theory of parallel algorithms around the PRAM model. While theorists were originally content with the PRAM theory, its relentless bashing led many of them to accept claims that no parallel system can ever be built that may allow programmers to abstract it as a PRAM. Much of the talk will be devoted to the explicit multi-threaded (XMT) platform whose development I have led to disprove this claim (since I last worked at Tel Aviv University in 1996). XMT comprises a programmer's workflow, significant hardware (a 64-processor machine) and software. The workflow incorporates PRAM algorithms by using an even simpler abstraction as its starting point. I will review evidence that the unique advantages of XMT on ease of programming, teaching and learning are coupled with competitive performance.


The talk may be of general interest: 1. I will present a way to build parallel computing from first principles around mathematical induction and a one-line abstraction, similar to the only successful general-purpose platform to date (the sequential stored-program model); and 2. I will demonstrate how merit sought by theory work could not have materialized without follow-up non-theory research.


References:

U. Vishkin. Using simple abstraction to reinvent computing for parallelism. CACM 54(1), January 2011. See http://portal.acm.org/citation.cfm?id=1866757 for some of the technical material of the talk.

"The Future of Computing Performance: Game Over or Next Level? Report from the Computer Science and Telecommunications Board's (CSTB) of the U.S. National Academies, December 2010 for contrasting the opportunities of parallel computing becoming mainstream with today's reality where only heroic programmers manage to exploit the vast parallelism of current commodity hardware.


Patterson, D. The trouble with multi-core: Chipmakers are busy designing microprocessors that most programmers can't handle. IEEE Spectrum (July 2010).

Website written partially by Niko Farhi