YoungEC '17

Young Researcher Workshop on Economics and Computation

Tel Aviv, Israel

Abstracts ECON-CS

Predicting Initial Play Using Crowd Predictions (Annie Liang)

We introduce a crowdsourcing approach to the problem of predicting initial play in games, using human predictions as inputs for predicting play. These human predictions are collected on the platform Mechanical Turk, where subjects are presented with a sequence of games, and asked to either predict the action most frequently chosen, or to rank the actions by most to least frequently played. We find that naive use of the prediction data outperforms the cognitive hierarchy model, and that predictive algorithms trained on these human predictions alone outperform algorithms trained on a large set of game features. This suggests that there is signal content in crowd predictions beyond what is captured by these latter approaches, and demonstrates that humans may have useful intuitions into "strategic" behaviors, even in moderately complex settings. These findings apply both to predicting behavior in lab games and in a large set of random games. This work demonstrates the potential of crowdsourcing as a complement to current techniques for prediction of economic behaviors. (joint with Drew Fudenberg)

On Information Elicitation and Mechanism Design (Bo Waggoner)

I'll talk a bit about the field of information elicitation, where the goal is to elicit truthful predictions or beliefs from strategic agents. Then I'll draw some connections to standard mechanism design and talk about a cute problem incorporating both: how should you design an ad auction if bidders know their own click-through rates and the platform doesn't? This problem is based on a 2013 paper with Yang Cai, Mohammad Mahdian, and Aranyak Mehta.

Directions in Modeling Bounded Rationality (Ariel Rubinstein)

I will discuss a model co-authored with Jacob Glazer in order to demonstrate directions of research in Economic Theory with Bounded Rationality. In the model, a listener first announces and commits to a codex, i.e., a set of conditions. The speaker then presents a (not necessarily true) profile that must satisfy the codex in order for the listener to be persuaded. The speaker is boundedly rational in the sense that his ability to come up with a persuasive profile is limited and depends on the true profile and on the content and framing of the codex. The circumstances under which the listener can design a codex that will implement his goal are fully characterized.

Algorithmic Modeling of Human Behavior (James Wright)

Behavioral game theory seeks to describe the way actual people (as compared to idealized, "rational" agents) act in strategic situations. I'll start by presenting our recent work in applying machine learning to the problem of predicting human play of unrepeated, simultaneous-move games. I'll also describe some ongoing work on applying models of human behavior in some realistic settings.

Peer Grading: From Theory to Practice (Alice Gao)

In this talk, I will describe our research effort on the peer grading problem, taking it from theory to practice. I will begin by describing a theoretical investigation of approaches for motivating students to put effort into formulating and reporting thorough evaluations. One promising approach is peer prediction, which leverages the correlation between students' beliefs to prompt them to invest effort. Unfortunately, peer prediction methods also incentivize students to coordinate on providing low-quality reviews, and thus may fail to obtain any useful information. We show that this problem is unavoidable if students can report based on low-cost signals of the submission's quality (e.g. superficial qualities of an essay such as formatting and length). We then consider ways of circumventing this problem by comparing students' reports against the ground truth, which is available in practice through teaching assistants, who can provide a limited number of trusted reviews. Of course, when such ground truth is available, a simpler approach is possible --- rewarding each student for agreement with the ground truth with some probability, and rewarding the student unconditionally otherwise. Surprisingly, we show that a naive way of combining peer prediction methods with the ground truth is inferior to this simpler method, requiring more ground truth while providing weaker incentive guarantees. I will then describe an improved hybrid scheme; our preliminary results show that this hybrid method can eliminate all no-effort coordination while requiring less access to the ground truth than the simpler approach. Finally, I will describe our ongoing work on evaluating peer prediction methods in controlled experiments and in a large undergraduate computer science class. This talk is based on joint work with Hu Fu, Karan Grover, Kevin Leyton-Brown, Patrick Luk, David Ma, James Wright, and Hedayat Zarkoob.

Complexity Theory and Algorithmic Game Theory: Some New Connections (Tim Roughgarden)

We survey three recent applications of complexity theory to algorithmic game theory. First, we explain when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the price of anarchy (POA) in games derived from the problem. This is a new approach to POA lower bounds, based on reductions in lieu of explicit constructions. These lower bounds are particularly significant for problems of designing games that have only near-optimal equilibria, ranging from the design of simple combinatorial auctions to the existence of effective tolls for routing networks. Second, we study "the borders of Border's theorem," a fundamental result in auction design that gives an intuitive linear characterization of the feasible interim allocation rules of a Bayesian single-item environment. We explain a complexity-theoretic barrier that indicates, assuming standard complexity class separations, that Border's theorem cannot be extended significantly beyond the state-of-the-art. Finally, we explain "why prices need algorithms," in the sense that the guaranteed existence of pricing equilibria (like Walrasian equilibria) is inextricably connected to the computational complexity of related optimization problems: demand oracles, revenue-maximization, and welfare-maximization. This connection implies a host of impossibility results, and suggests a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept. Includes joint work with Parikshit Gopalan, Noam Nisan, and Inbal Talgam-Cohen.

From Anomalies to Forecasts: Toward a Descriptive Model of Human Choice Behavior (Ido Erev)

Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in one-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current paper analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and four additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate And Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (Based on research with Eyal Ert, Ori Plonsky, Doron Cohen, and Oded Cohen).
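The "reliance on small samples" mechanism can be illustrated with a toy calculation (a hypothetical gamble of our choosing, not an implementation of BEAST): an agent who estimates a risky option from only k sampled outcomes will usually miss a rare event, and will therefore behave as if she underweights it.

```python
from math import comb

def p_choose_risky(p_rare, payoff_rare, safe, k):
    """Probability that a small-sample agent picks the risky option: she
    draws k outcomes of the gamble (payoff_rare w.p. p_rare, else 0) and
    compares the sample mean to the safe payoff. Toy illustration only."""
    prob = 0.0
    for hits in range(k + 1):
        sample_mean = hits * payoff_rare / k
        if sample_mean > safe:
            prob += comb(k, hits) * p_rare**hits * (1 - p_rare)**(k - hits)
    return prob

# The risky option's EV is 3.2 > 3, yet a 5-draw sampler usually misses
# the rare event and takes the safe option instead:
print(round(p_choose_risky(0.1, 32, safe=3, k=5), 3))  # → 0.41
```

The agent chooses the EV-superior risky option only when the rare event appears in her sample, which happens with probability 1 - 0.9^5 ≈ 0.41.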

Planning Problems for Sophisticated Agents with Present Bias (Manish Raghavan)

Present bias, the tendency to weigh costs and benefits incurred in the present too heavily, is one of the most widespread human behavioral biases. It has also been the subject of extensive study in the behavioral economics literature. While the simplest models assume that the agents are naive, reasoning about the future without taking their bias into account, there is considerable evidence that people often behave in ways that are sophisticated with respect to present bias, making plans based on the belief that they will be present-biased in the future. For example, committing to a course of action to reduce future opportunities for procrastination or overconsumption are instances of sophisticated behavior in everyday life. Models of sophisticated behavior have lacked an underlying formalism that allows one to reason over the full space of multi-step tasks that a sophisticated agent might face. This has made it correspondingly difficult to make comparative or worst-case statements about the performance of sophisticated agents in arbitrary scenarios. In this paper, we incorporate the notion of sophistication into a graph-theoretic model that we used in recent work for modeling naive agents. This new synthesis of two formalisms - sophistication and graph-theoretic planning - uncovers a rich structure that wasn't apparent in the earlier behavioral economics work on this problem. In particular, our graph-theoretic model makes two kinds of new results possible. First, we give tight worst-case bounds on the performance of sophisticated agents in arbitrary multi-step tasks relative to the optimal plan. Second, the flexibility of our formalism makes it possible to identify new phenomena that had not been seen in prior literature: these include a surprising non-monotonic property in the use of rewards to motivate sophisticated agents and a framework for reasoning about commitment devices.
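The graph-theoretic planning model can be sketched concretely. The code below is an informal reading of a Kleinberg-Oren-style model (the paper's formalism may differ in details, and the toy graph is ours): the agent inflates the cost of the edge she is about to traverse by a bias factor b > 1; a naive agent assumes her future selves will be unbiased, while a sophisticated agent anticipates her own future bias by backward induction.

```python
import functools

def plan(graph, start, target, bias, sophisticated):
    """Present-biased path planning sketch. graph[u] maps successors to
    edge costs. Returns the true cost of the path the agent follows."""

    @functools.lru_cache(maxsize=None)
    def shortest(u):  # true shortest-path cost from u to target
        if u == target:
            return 0.0
        return min(c + shortest(v) for v, c in graph[u].items())

    @functools.lru_cache(maxsize=None)
    def actual(u):  # cost of the path the agent will really follow
        if u == target:
            return 0.0
        # Naive agents evaluate continuations by shortest paths;
        # sophisticated agents by the path their biased future selves take.
        cont = actual if sophisticated else shortest
        v, c = min(graph[u].items(), key=lambda vc: bias * vc[1] + cont(vc[0]))
        return c + actual(v)

    return actual(start)

# Toy graph: a tempting shortcut at "a" leads to an expensive detour "b".
graph = {"s": {"a": 1, "t": 2.5}, "a": {"t": 2, "b": 0.1}, "b": {"t": 5}}
naive = plan(graph, "s", "t", bias=3, sophisticated=False)
soph = plan(graph, "s", "t", bias=3, sophisticated=True)
print(naive, soph)  # → 6.1 2.5
```

The sophisticated agent foresees that her future self will take the tempting cheap edge at "a" and pay dearly, so she commits at "s" to the pricier direct route, an instance of the commitment behavior the abstract describes.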

Calibration, Smooth Calibration, and Game Dynamics (Sergiu Hart)

How good is a forecaster? Assume for concreteness that every day the forecaster issues a forecast of the type "the chance of rain tomorrow is 30%." A simple test one may conduct is to calculate the proportion of rainy days out of those days that the forecast was 30%, and compare it to 30%; and do the same for all other forecasts. A forecaster is said to be _calibrated_ if, in the long run, the differences between the actual proportions of rainy days and the forecasts are small --- no matter what the weather really was. We start from the classical result that calibration can always be guaranteed by randomized forecasting procedures (a simple proof will be provided). We propose to smooth out the calibration score, which measures how good a forecaster is, by combining nearby forecasts. While regular calibration can be guaranteed only by randomized forecasting procedures, we show that smooth calibration can be guaranteed by deterministic procedures. As a consequence, it does not matter if the forecasts are leaked, i.e., made known in advance: smooth calibration can nevertheless be guaranteed (while regular calibration cannot). Moreover, our procedure has finite recall, is stationary, and all forecasts lie on a finite grid. We then consider smooth calibrated learning in n-person games, and show that in the long run it is close to Nash equilibria most of the time. (joint work with Dean Foster)
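The calibration test described above is easy to state in code. A minimal sketch (with illustrative data, not from the talk): bin days by the issued forecast and average the gap between each forecast value and the empirical frequency of rain on those days.

```python
from collections import defaultdict

def calibration_score(forecasts, outcomes):
    """Average absolute gap between each forecast value and the empirical
    frequency of rain on the days that value was issued, weighted by how
    often the value was used. Lower is better; 0 means calibrated."""
    buckets = defaultdict(list)
    for f, y in zip(forecasts, outcomes):
        buckets[f].append(y)
    n = len(forecasts)
    return sum(len(ys) / n * abs(f - sum(ys) / len(ys))
               for f, ys in buckets.items())

# A forecaster who says 30% on days where it in fact rains 3 times in 10
# is perfectly calibrated on this record:
forecasts = [0.3] * 10
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(calibration_score(forecasts, outcomes))  # → 0.0
```

Smooth calibration replaces the hard grouping by exact forecast value with a kernel that also credits nearby forecasts; this sketch shows only the classical (non-smooth) score.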

Complexity of Pricing (Noam Nisan)

As economic systems "move" to the Internet, they can become much more complex and this new complexity often becomes their defining characteristic. We will consider a very simple scenario of this form: a single seller that is selling multiple items to a single buyer. We will discuss the question of how *complex* must the pricing scheme be in order for the seller to maximize (approximately, at least) his revenue. Based on joint works with Sergiu Hart, with Shaddin Duhgmi and Li Han and with Moshe Babioff and Yannai Gonczarowski.

The FedEx Problem (Kira Goldner)

Abstract: Consider the following setting: a customer has a package and is willing to pay up to some value v to ship it, but needs it to be shipped by some deadline d. Given the joint prior distribution from which (v, d) pairs are drawn, we characterize the auction that yields optimal revenue, contributing to the very limited understanding of optimal auctions beyond the single-parameter setting. Our work further demonstrates the importance of 'ironing' in revenue maximization, helping to illustrate why randomization is necessary to achieve optimal revenue. Finally, we strengthen the emerging understanding that duality is useful for both the design and analysis of optimal auctions in multi-parameter settings. Joint work with Amos Fiat, Anna Karlin, and Elias Koutsoupias.
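The structure of the problem can be made concrete with a toy computation over deterministic menus (hypothetical values and price grid; as the abstract notes, the truly optimal FedEx auction may require randomization and ironing, which this sketch ignores). A deadline-indexed menu posts a price for each shipping day; a customer of type (v, d) may use any option with deadline at most d, and buys the cheapest such option if it costs at most v.

```python
from itertools import product

def menu_revenue(prices, types):
    """Expected revenue of a deadline-indexed price menu. prices[i] is the
    price of shipping by day i+1; types is a list of (v, d, prob)."""
    rev = 0.0
    for v, d, prob in types:
        p = min(prices[:d])  # cheapest option that meets deadline d
        if p <= v:
            rev += prob * p
    return rev

def best_menu(types, grid, days):
    """Brute-force the revenue-maximizing deterministic menu on a grid."""
    return max(product(grid, repeat=days),
               key=lambda prices: menu_revenue(prices, types))

# Two equally likely types: impatient-and-rich, patient-and-poor.
types = [(3, 1, 0.5), (1, 2, 0.5)]
menu = best_menu(types, grid=[1, 2, 3], days=2)
print(menu, menu_revenue(menu, types))  # → (3, 1) 2.0
```

The optimal menu screens by deadline: it charges the impatient type her full value 3 while still selling to the patient type at price 1, beating any single posted price.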

Beyond-X: Toward more general Prophet Inequalities and Secretary Problems (Aviad Rubinstein)


Online prediction with selfish experts (Okke Joost Schrijvers)

We consider the problem of binary prediction with expert advice in settings where experts have agency and seek to maximize their credibility. This talk makes three main contributions: first, it defines a model to reason formally about settings with selfish experts, and demonstrates that “incentive compatible” (IC) algorithms are closely related to the design of proper scoring rules. Second, we design an IC algorithm with good performance guarantees. Third, we give a formal separation between the power of symmetric scoring-rule-based prediction algorithms and that of arbitrary prediction algorithms.
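The connection to proper scoring rules can be illustrated with the quadratic (Brier) rule: an expert paid according to a proper scoring rule maximizes her expected payment by reporting her true belief. A minimal numeric check (a textbook fact, not the paper's algorithm):

```python
def brier_score(report, outcome):
    # Quadratic scoring rule, rescaled to [0, 1]; higher is better.
    return 1 - (report - outcome) ** 2

def expected_score(report, belief):
    # Expected score when the binary outcome is 1 with probability `belief`.
    return belief * brier_score(report, 1) + (1 - belief) * brier_score(report, 0)

belief = 0.7
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda r: expected_score(r, belief))
print(best)  # → 0.7: truthful reporting maximizes the expected Brier score
```

An IC prediction-with-expert-advice algorithm must preserve this property when experts care about the weight the algorithm assigns them rather than about a direct payment.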

The law of one price for heterogeneous goods (with Avinatan Hassidim) (Assaf Romm)

The law of one price fails to hold in the presence of heterogeneity. Employing Shapley & Shubik’s (1971) assignment games to model markets in which buyers have different valuations for different goods, and under the assumption of bounded heterogeneity, we recover a tight connection between competitive prices. This connection holds for an increasingly large share of markets. This approximate version of the law of one price implies extremely uneven surplus distribution even for slightly unbalanced markets, as is the case with homogeneous goods. We show that the results hold even in the presence of goods with different qualities and buyers with different sensitivity to quality.

Simultaneous Communication Complexity of Two-party Combinatorial Auctions (Jieming Mao)

We consider the following communication problem: Alice and Bob each have some valuation functions v1(·) and v2(·) over subsets of m items, and their goal is to partition the items into S, T in a way that maximizes the welfare, v1(S) + v2(T). We study both the allocation problem, which asks for a welfare-maximizing partition, and the decision problem, which asks whether or not there exists a partition guaranteeing certain welfare, for binary XOS valuation functions. For interactive protocols with polynomial communication, the problems turn out to be equally challenging and a tight 3/4-approximation is known for both [Fei06, DS06]. Surprisingly, the allocation problem is provably easier for simultaneous protocols. Specifically, we show: There exists a simultaneous, randomized protocol with polynomial communication that selects a partition whose expected welfare is at least 3/4 of the optimum. This matches the guarantee of the best interactive, randomized protocol with polynomial communication. For all ε > 0, any simultaneous, randomized protocol that decides whether the welfare of the optimal partition is ≥ 1 or ≤ 3/4 − 1/108 + ε correctly with probability > 1/2 + 1/poly(m) requires exponential communication. In other words, with simultaneous, randomized, polynomial communication, it is possible to guarantee a partition within 3/4 of the optimum, without learning the quality of the output partition itself (nor the quality of the optimum within a factor of 3/4−1/108+ε, for any constant ε > 0). So the problem exhibits a separation between what is attainable for the allocation problem and the decision problem. Moreover, the allocation problem can be approximated via a simultaneous protocol just as well as the best interactive protocol, whereas the decision problem exhibits a separation between what is attainable for simultaneous and interactive protocols with polynomial communication.
We further discuss the implications of our results for the design of truthful combinatorial auctions in general, and extensions to general XOS valuations. In particular, our protocol for the allocation problem implies a new style of truthful mechanisms.
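On a toy instance, the allocation problem itself is just a search over partitions. The exhaustive sketch below (with hypothetical valuations of our choosing) computes the welfare-maximizing partition; it is exponential in m, so it deliberately ignores the communication constraints that are the subject of the paper.

```python
from itertools import combinations

def best_partition(items, v1, v2):
    """Exhaustively search all partitions (S, T) of `items` and return the
    welfare-maximizing one. Purely illustrative: the paper asks what
    protocols with *polynomial* communication can achieve."""
    best = (None, None, float("-inf"))
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            S = frozenset(subset)
            T = frozenset(items) - S
            w = v1(S) + v2(T)
            if w > best[2]:
                best = (S, T, w)
    return best

# Toy binary valuations: each player wants a particular "team" of items.
v1 = lambda S: 1 if {"a", "b"} <= S else 0
v2 = lambda S: 1 if {"c"} <= S else 0
S, T, w = best_partition({"a", "b", "c"}, v1, v2)
print(w)  # → 2
```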

Algorithmic Bayesian Persuasion (Haifeng Xu)

Persuasion, defined as the act of exploiting an informational advantage in order to influence the decisions of others, is ubiquitous. Indeed, persuasive communication has been estimated to account for a considerable share of all economic activity in the US. In this paper, we examine persuasion through a computational lens and focus on perhaps the most fundamental model in this space, namely the Bayesian Persuasion model of Kamenica and Gentzkow. We study optimal Bayesian persuasion in three of the most natural input models and pin down the computational complexity for each.
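The flavor of the optimization can be seen in Kamenica and Gentzkow's classic binary "prosecutor" example (a standard textbook instance, not one of the paper's input models): the state is Guilty with prior probability 0.3, the judge convicts iff her posterior probability of guilt is at least 1/2, and the sender commits to a signaling scheme to maximize the conviction probability.

```python
def best_persuasion(prior, grid_n=10000):
    """Grid-search the sender-optimal binary signaling scheme: send
    'convict' always when guilty and with probability q when innocent.
    The judge convicts iff P(Guilty | 'convict') >= 1/2. For prior <= 1/2
    the optimal conviction probability is 2 * prior."""
    best = prior  # full revelation: convict exactly the guilty
    for i in range(grid_n + 1):
        q = i / grid_n
        p_signal = prior + (1 - prior) * q      # Pr['convict' is sent]
        posterior = prior / p_signal            # Pr[Guilty | 'convict']
        if posterior >= 0.5:
            best = max(best, p_signal)
    return best

print(round(best_persuasion(0.3), 3))  # → 0.6
```

The sender pools just enough innocent states with the guilty ones to keep the posterior at the judge's threshold, doubling the conviction rate relative to full disclosure.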

Testing Choice Theories (Jan Christoph Schlegel)

We propose a new way of evaluating the power of a revealed preference experiment. We argue that revealed preference theory should be tested against the alternative hypothesis that the behavior of an agent is far away - according to a suitable rationality index - from being rationalizable. We derive a sampling procedure that samples choices from only a small fraction of all possible budget sets but still allows us to reject revealed preference theory with high power (given our specification of the alternative hypothesis). Using methods from the literature on property testing, we show that our sampling procedure has asymptotically optimal sample complexity.

Approximate Modularity Revisited (Inbal Talgam Cohen)

Set functions with convenient properties (such as submodularity) often arise in algorithmic game theory, and allow for improved properties of optimization algorithms and mechanisms. It is natural to ask (e.g., in the context of data-driven applications) how robust such properties are, and whether small deviations from them can be tolerated. We consider two such questions in the important special case of linear set functions. One question that we address is whether any set function that approximately satisfies the modularity equation (linear functions satisfy the modularity equation exactly) is close to a linear function. The answer to this is positive (in a precise formal sense) as shown by Kalton and Roberts [1983] (and further improved by Bondarenko, Prymak, and Radchenko [2013]). We revisit their proof idea that is based on expander graphs, and provide significantly stronger upper bounds by combining it with new techniques. Furthermore, we provide improved lower bounds for this problem. Another question that we address is that of how to learn a linear function $h$ that is close to an approximately linear function $f$, while querying the value of $f$ on only a small number of sets. Joint work with Uriel Feige and Michal Feldman.

Learning in Games: Robustness of Fast Convergence (Thodoris Lykouris)

In this talk, we show that learning algorithms satisfying a low approximate regret property experience fast convergence to approximate optimality in a large class of repeated games. Our property, which simply requires that each learner has small regret compared to a $(1+\epsilon)$-multiplicative approximation to the best action in hindsight, is ubiquitous among learning algorithms; it is satisfied even by the vanilla Hedge forecaster. Our results improve upon recent work of Syrgkanis et al. in a number of ways. We require only that players observe payoffs under other players' realized actions, as opposed to expected payoffs. We further show that convergence occurs with high probability, and show convergence under bandit feedback. Finally, we improve upon the speed of convergence by a factor of $n$, the number of players. Both the scope of settings and the class of algorithms for which our analysis provides fast convergence are considerably broader than in previous work. Our framework also applies to settings where the player set is evolving over time. In a joint work with Vasilis Syrgkanis and Eva Tardos, we introduced the model of dynamic population games where at each round every player changes type independently with some fixed probability. There we proved that when players use algorithms satisfying low regret for all time intervals (adaptive regret) and the underlying optimization problem satisfies some stability property, the resulting outcomes are efficient even when a large fraction of the population changes type at every step. Via a shifting extension of low approximate regret, we strengthen these results by allowing players to select learning algorithms from a larger class, which includes a minor variant of the basic Hedge algorithm. Joint work with Dylan J. Foster, Zhiyuan Li, Karthik Sridharan, and Eva Tardos.
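The vanilla Hedge forecaster mentioned above is only a few lines of code. The sketch below (on a toy loss sequence of our choosing) runs Hedge with full-information feedback and checks that its cumulative loss stays close to that of the best action in hindsight, i.e., that regret is small.

```python
import math

def hedge(loss_rows, eta=0.5):
    """Vanilla Hedge: maintain one weight per action, play the normalized
    weights each round, and exponentially downweight actions that incur
    loss. Returns the algorithm's cumulative (expected) loss."""
    n = len(loss_rows[0])
    w = [1.0] * n
    total = 0.0
    for losses in loss_rows:
        z = sum(w)
        p = [wi / z for wi in w]
        total += sum(pi * li for pi, li in zip(p, losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return total

# Action 1 is best in hindsight (zero loss); Hedge quickly locks onto it.
rows = [[1.0, 0.0]] * 100
alg = hedge(rows)
best = min(sum(r[i] for r in rows) for i in range(2))  # = 0.0
print("regret:", alg - best)  # stays well below the ln(n)/eta + eta*T/8 bound
```

Low *approximate* regret, the property the talk relies on, is weaker still: the learner need only compete with a (1+ε)-multiplicative approximation of this hindsight benchmark.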

Nash social welfare for strategic agents (Simina Brânzei)

The Fisher market is a fundamental model of an economy, which captures the one round version of the resource allocation problem and has strong fairness and efficiency guarantees. The Fisher market can be used as a mechanism for allocating goods, except its implementation is impeded by a strong informational assumption: preferences should be public. In this talk I will discuss the performance of the Fisher market mechanism on its own benchmark---the celebrated Nash social welfare---when information is private, on two representative classes of valuations: perfect substitutes and complements. For perfect substitutes, the Nash equilibria of the Fisher market turn out to approximate the optimal Nash social welfare within a constant factor; however, for perfect complements, their performance is arbitrarily bad, yielding very inefficient solutions. Strikingly, the Trading Post mechanism---an indirect market mechanism also known as the Shapley-Shubik game---has significantly better performance than the Fisher market on its own benchmark. Not only does Trading Post attain a matching bound for perfect substitutes, but it also implements almost perfectly the Nash social welfare objective for perfect complements, achieving an approximation of 1+epsilon for every epsilon>0. Moreover, all the equilibria of the Trading Post mechanism are pure and satisfy an important notion of individual fairness known as proportionality.
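The benchmark itself is simple to state: the Nash social welfare of an allocation is the geometric mean of the agents' utilities. The sketch below brute-forces the NSW-maximizing allocation of a tiny instance with indivisible items and hypothetical additive values (purely to illustrate the objective; the mechanisms in the talk allocate divisible goods).

```python
from itertools import product

def nsw(utils):
    """Nash social welfare: geometric mean of the agents' utilities."""
    prod = 1.0
    for u in utils:
        prod *= u
    return prod ** (1 / len(utils))

def best_nsw_allocation(values):
    """Brute-force the NSW-maximizing assignment of items to agents.
    values[i][j] = agent i's additive value for item j."""
    n, m = len(values), len(values[0])
    best, best_val = None, -1.0
    for assignment in product(range(n), repeat=m):
        utils = [sum(values[i][j] for j in range(m) if assignment[j] == i)
                 for i in range(n)]
        v = nsw(utils)
        if v > best_val:
            best, best_val = assignment, v
    return best, best_val

values = [[3, 1], [1, 3]]  # each agent prefers "her" item
alloc, val = best_nsw_allocation(values)
print(alloc)  # → (0, 1): item 0 to agent 0, item 1 to agent 1
```

The multiplicative objective rules out allocations that leave any agent with nothing, which is the source of the fairness guarantees the abstract invokes.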

Geometric Approaches to Auction Design (Zihe Wang)

Revenue-optimal auction design has been a topic of intensive research over the past twenty years. In multi-item auctions, a closed characterization of the optimal mechanism is still open, even for relatively simple distributions. In the talk, we introduce two approaches to tackle the problem. The first approach focuses on the local geometric property of the utility function. We analyze the changes of revenue with respect to the local geometric adjustments of the utility of the buyer. The second approach studies the application of Border’s Theorem. We consider the problem of designing revenue-optimal auctions for selling two items and bidders’ valuations are independent among bidders but negatively correlated among items.

A Simple Model for the Top Trading Cycles School Choice Mechanism Using Fluid Approximation (Irene Yuan Lo)

Many cities determine the assignment of students to schools through a school choice mechanism which calculates an assignment based on student preferences and school priorities. The prominent Top Trading Cycles (TTC) algorithm can be applied to give a strategy-proof and Pareto efficient assignment, but the combinatorial description of TTC makes it non-transparent to parents and difficult to analyze for designers. Using a fluid approximation model, we give a tractable characterization of the TTC mechanism for school choice: the TTC assignment can be simply described by n^{2} admission thresholds, where n is the number of schools, and these thresholds can be calculated by a differential equation. The model also allows us to analyze general sequential trade procedures, and show that changes in trading priority can have non-trivial indirect effects on the allocation. We also apply this model to solve for optimal investment in school quality, and help explain empirical findings about the relation between the TTC assignment and the Deferred Acceptance assignment. To validate the fluid model we show that it gives a good approximation for strongly converging economies. Our analysis draws on an interesting connection between continuous trading procedures and continuous time Markov chains.
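The discrete mechanism that the fluid model approximates can be sketched directly. Below is a minimal unit-capacity TTC implementation on a toy instance of our own devising (real school choice adds capacities and tie-breaking): each round, every remaining student points to her favorite remaining school, every remaining school points to its highest-priority remaining student, and the students on a cycle are assigned and removed.

```python
def ttc(student_prefs, school_priorities):
    """Top Trading Cycles with unit-capacity schools (illustrative sketch)."""
    assignment = {}
    students = set(student_prefs)
    schools = set(school_priorities)
    while students:
        # Pointers: students to favorite school, schools to top student.
        s_pt = {s: next(c for c in student_prefs[s] if c in schools)
                for s in students}
        c_pt = {c: next(s for s in school_priorities[c] if s in students)
                for c in schools}
        # Walk student -> school -> student ... until a cycle closes.
        seen, cur = [], next(iter(students))
        while cur not in seen:
            seen.append(cur)
            cur = c_pt[s_pt[cur]]
        for s in seen[seen.index(cur):]:  # everyone on the cycle trades
            assignment[s] = s_pt[s]
            students.remove(s)
            schools.remove(s_pt[s])
    return assignment

prefs = {"Ann": ["Y", "X"], "Bob": ["X", "Y"]}
prios = {"X": ["Ann", "Bob"], "Y": ["Bob", "Ann"]}
match = ttc(prefs, prios)
print(sorted(match.items()))  # → [('Ann', 'Y'), ('Bob', 'X')]
```

Here Ann has top priority at X and Bob at Y, but each prefers the other school, so TTC lets them trade priorities in a single cycle.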

The Strange Case of Privacy in Equilibrium Models (Rachel Cummings)

We study how privacy technologies affect user and advertiser behavior in a simple economic model of targeted advertising. In our model, a consumer first decides whether or not to buy a good, and then an advertiser chooses an advertisement to show the consumer. The consumer's value for the good is correlated with her type, which determines which ad the advertiser would prefer to show to her---and hence, the advertiser would like to use information about the consumer's purchase decision to target the ad that he shows. In our model, the advertiser is given only a differentially private signal about the consumer's behavior---which can range from no signal at all to a perfect signal, as we vary the differential privacy parameter. This allows us to study equilibrium behavior as a function of the level of privacy provided to the consumer. We show that this behavior can be highly counter-intuitive, and that the effect of adding privacy in equilibrium can be completely different from what we would expect if we ignored equilibrium incentives. Specifically, we show that increasing the level of privacy can actually increase the amount of information about the consumer's type contained in the signal the advertiser receives, lead to decreased utility for the consumer and increased profit for the advertiser, and that generally these quantities can be non-monotonic and even discontinuous in the privacy level of the signal. Joint work with Katrina Ligett, Mallesh Pai, and Aaron Roth.

The Competition Complexity of Auctions: A Bulow-Klemperer Result for Multi-Dimensional Bidders (Alon Eden)

A seminal result of Bulow and Klemperer (BK) shows the power of competition for extracting revenue: when selling a single item to n bidders whose values are independently drawn from a regular distribution, the simple welfare-maximizing VCG mechanism (in this case, a second-price auction) with one additional bidder extracts at least as much revenue in expectation as the optimal mechanism. The beauty of this theorem stems from the fact that VCG is a prior-independent mechanism, where the seller possesses no information about the distribution, and yet, by recruiting one additional bidder, it performs better than any prior-dependent mechanism tailored exactly to the distribution at hand. This result, however, is restricted to the simple single-item setting described above. Recent years have seen great progress in broadening our understanding of revenue-maximizing mechanisms in multi-dimensional settings. Most of these extensions, however, have focused almost exclusively on approximation results obtained by prior-dependent mechanisms. In this work, we establish the first full BK results in multi-dimensional environments, where VCG mechanisms surpass the revenue of the optimal (possibly randomized, Bayesian incentive-compatible) mechanism by recruiting additional bidders. For a given environment with n symmetric bidders and m heterogeneous items, we term the number of additional bidders needed for this guarantee the environment’s competition complexity. Extending the recent duality-based framework of Cai et al. for reasoning about optimal revenue, we show that the competition complexity of additive valuations drawn from independent and regular item distributions is at most n + 2m - 2 and at least log(m).
We extend our results to additive valuations subject to downward-closed constraints, showing that these significantly more general valuations increase the competition complexity by at most an additive m-1 factor, and further improve this bound for the special case of matroid constraints.
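The single-item seed of the BK theorem can be checked with closed forms. Assuming one bidder with an i.i.d. Uniform[0,1] value (a standard textbook case, far narrower than the settings in the paper):

```python
def spa_revenue(k):
    """Expected revenue of a second-price auction with k i.i.d.
    Uniform[0,1] bidders: E[second-highest of k] = (k - 1) / (k + 1)."""
    return (k - 1) / (k + 1)

def myerson_revenue_one_bidder():
    """Optimal revenue from one Uniform[0,1] bidder: post the Myerson
    reserve 1/2, which sells with probability 1/2."""
    return 0.5 * 0.5

# Bulow-Klemperer for n = 1: the prior-independent second-price auction
# with one extra bidder beats the optimal one-bidder mechanism.
print(spa_revenue(2), ">=", myerson_revenue_one_bidder())  # 1/3 >= 1/4
```

The competition-complexity results above generalize exactly this comparison to m heterogeneous items and multi-dimensional bidders.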

Spilling Secrets to Competitors: The Value of Information Revelation under Price Competition (Shoshana Vasserman) Joint work with Greg Lewis and Nageeb Ali.

Consider two firms selling differentiated products, engaged in price competition to a single customer. The firms do not know the customer’s preferences, and the customer who knows her own preferences can signal her type to these firms. We consider four signaling schemes, each of which is a communication technology that imposes restrictions on the kinds of signals that the consumer can send. The four schemes correspond to varying the communication technology along two dimensions: the ability to send credible messages (“hard information”) vs. communicating through cheap talk only (“soft information”), and the ability to strategically send asymmetric messages to different firms vs. being forced to send identical messages to both firms. Credible messages might be thought of as cookies, which cannot be forged (by the average consumer) and so can be thought of as verifiable. A symmetric credible signaling scheme might be a cookie sharing API, in which participating consumers’ cookies are stored by a centralized platform and made accessible to all firms competing through the platform. An asymmetric credible signaling scheme, on the other hand, might be a decentralized cookie collection environment in which each firm obtains only the cookies of consumers who browse its website, so that consumers can discriminate in their disclosure of cookies. Cheap talk messages are unverifiable communications such as survey answers, which consumers can provide at their leisure and feign at their whim. We characterize the Perfect Bayesian Equilibrium outcomes in each case, and discuss the resulting welfare considerations.

Efficient Empirical Revenue Maximization in Single-Parameter Auction Environments (Yannai Gonczarowski)

We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular, and may differ for the various bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed in its entirety in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments.
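To make the sample-based approach concrete, here is a minimal toy sketch for the simplest possible case: a single bidder and a posted price chosen purely from valuation samples. This is only an illustration of optimizing empirical revenue in value-space; it is not the paper's algorithm for general single-parameter environments, and the function name and data are hypothetical.

```python
# Toy sketch: choose a posted price maximizing empirical revenue on
# a sample of bidder valuations (single bidder, single item).
def best_empirical_price(samples):
    """Return (price, empirical revenue) maximizing p * Pr_hat[v >= p]."""
    n = len(samples)
    best_p, best_rev = 0.0, 0.0
    # Some optimal posted price always coincides with a sampled value,
    # so it suffices to search over the distinct sample values.
    for p in sorted(set(samples)):
        rev = p * sum(1 for v in samples if v >= p) / n
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev

price, rev = best_empirical_price([1, 2, 3, 4])
print(price, rev)  # prints: 2 1.5
```

With more samples, the empirical revenue of each candidate price concentrates around its true expected revenue, which is the basic intuition behind sample-based auction learning.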

Posted Price Mechanisms (Michal Feldman)

Posted price mechanisms are simple, straightforward, and strategyproof. We study two scenarios of combinatorial markets where sequential posted price mechanisms achieve optimal or nearly optimal welfare. The first scenario is matching markets with full information, where optimal welfare is obtained. The second is markets with submodular (and XOS) valuations with Bayesian information, where half of the optimal welfare is obtained. We distinguish between static and dynamic pricing, and present various extensions of the above findings. Based on joint works with Vincent Cohen-Addad, Alon Eden and Amos Fiat (2016), with Nick Gravin and Brendan Lucier (2015) and with Paul Duetting, Thomas Kesselheim and Brendan Lucier (2016).

Recent Progress on Simple Multi-item Auctions (Andrew Yao)

In 2012, Hart and Nisan initiated a line of investigation on the design and performance analysis of multi-item auctions. A key feature of their approach is its emphasis on algorithmic simplicity, distributional generality, and conceptual novelty in techniques. Many have been inspired to follow up on this line of research, obtaining surprisingly simple auctions with provable constant-factor approximations to the optimal revenue, for an ever wider class of problems. In this talk we survey some of the results and insights learned, and discuss possible future directions.

Centralized Clearinghouse Design: A Quantity-Quality Tradeoff (Nick Arnosti)

Stable matching mechanisms are used to clear many two-sided markets. In practice, these mechanisms leave many agents on both sides unmatched. What factors determine the number of unmatched agents, and the quality of matches that do form? This paper answers these questions, with a particular focus on how match outcomes depend on correlations in agent preferences. I consider three canonical preference structures: fully idiosyncratic preferences, common preferences (agents agree on the attractiveness of those on the opposite side), and aligned preferences (potential partners agree on the attractiveness of their match). I find that idiosyncratic preferences result in more matches than common preferences do. Perhaps more surprisingly, the case of aligned preferences results in the fewest matches. Regarding match quality, the story reverses itself: aligned preferences produce the most high-quality matches, followed by common preferences. These facts have implications for the design of priority rules and tie-breaking procedures in school choice settings, and establish a fundamental tradeoff between matching many students and maximizing the number of students who get one of their top choices.

AI with GT Flavor (Moshe Tennenholtz)

The design of "game theoretic" agents is an ultimate challenge for AI: How should an agent reason/plan/learn in order to optimize decisions when facing other decision makers? How should a mediator lead a set of agents who “selfishly” reason/plan/learn to a socially efficient outcome? The objective of this talk is to share a (biased and partial) perspective on some of the theories that have emerged while addressing optimization, reasoning, and learning, when these are required to be applicable to multi-agent settings.

Pricing Social Goods (Tomer Ezra)

Social goods are goods that grant value not only to their owner but also to the owner's surrounding, including family, friends, office mates and acquaintances. The magnitude of the benefit a non-owner party derives from a good is affected by many factors, including the type of the good, its availability, and the social status of the non-owner. Depending on this magnitude and on the price of the good, a potential buyer might refrain from purchasing the good, hoping to rely on others. A seller of social goods must take these considerations into account when setting prices. While the literature on optimal pricing has advanced considerably over the last decade, little is known about optimal pricing schemes for selling social goods. In this paper, we introduce a Bayesian model of social good sales, and devise nearly optimal pricing schemes for various types of externalities, for both simultaneous and sequential sales.

Does Information Revelation Improve Revenue? (Christos Tzamos)

We study the problem of optimal auction design in a valuation model, explicitly motivated by online ad auctions, in which there is two-way informational asymmetry, in the sense that private information is available to both the seller (the item type) and the bidders (their types), and the value of each bidder for the item depends both on his own and the item’s type. Importantly, we allow arbitrary auction formats involving, potentially, several rounds of signaling from the seller and decisions by the bidders, and seek to find the optimum co-design of signaling and auction (we call this optimum the “optimum augmented auction”). We characterize exactly the optimum augmented auction for our valuation model by establishing its equivalence with a multi-item Bayesian auction with additive bidders. Surprisingly, in the optimum augmented auction there is no signaling whatsoever, and in fact the seller need not access the available information about the item type until after the bidder chooses his bid. Suboptimal solutions to this problem, which have appeared in the recent literature, are shown to correspond to well-studied ways to approximate multi-item auctions by simpler formats, such as grand-bundling (this corresponds to Myerson’s auction without any information revelation), selling items separately (this corresponds to Myerson’s auction preceded by full information revelation as in [FJM+12]), and fractional partitioning (this corresponds to Myerson’s auction preceded by optimal signaling). Consequently, all these solutions are separated by large approximation gaps from the optimum revenue.

A Simple and Approximately Optimal Mechanism for a Buyer with Complements (Ophir Friedler)

We consider a revenue-maximizing seller with m heterogeneous items and a single buyer whose valuation v for the items may exhibit both substitutes (i.e., for some S, T: v(S \cup T) < v(S) + v(T)) and complements (i.e., for some S, T: v(S \cup T) > v(S) + v(T)). We show that the better of selling the items separately and bundling them together guarantees an O(d) approximation to the optimal revenue, where d is a measure of the degree of complementarity. This is the first approximately optimal mechanism for a buyer whose valuation exhibits any kind of complementarity, and it extends the work of Rubinstein and Weinberg [2015], which proved that the same simple mechanisms achieve a constant-factor approximation when buyer valuations are subadditive, the most general class of complement-free valuations. Our proof is enabled by the recent duality framework developed in Cai et al. [2016], which we use to obtain a bound on the optimal revenue in this setting. Our main technical contributions are specialized to handle the intricacies of settings with complements, and include an algorithm for partitioning edges in a hypergraph. Even nailing down the right model and notion of "degree of complementarity" needed to obtain meaningful results is of interest, as the natural extensions of previous definitions provably fail. Joint work of Alon Eden, Michal Feldman, Ophir Friedler, Inbal Talgam-Cohen, and S. Matthew Weinberg.
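The substitutes/complements inequalities above can be illustrated with a toy two-item valuation (the specific numbers are hypothetical, chosen only to make each inequality strict; they are not from the paper):

```python
# Toy two-item valuations illustrating the definitions.
# Complements: the bundle is worth more than the sum of its parts.
def v_comp(bundle):
    return {(): 0, (0,): 1, (1,): 1, (0, 1): 3}[tuple(sorted(bundle))]

# Substitutes: the bundle is worth less than the sum of its parts.
def v_sub(bundle):
    return {(): 0, (0,): 2, (1,): 2, (0, 1): 3}[tuple(sorted(bundle))]

S, T = {0}, {1}
print(v_comp(S | T) > v_comp(S) + v_comp(T))  # True: complements
print(v_sub(S | T) < v_sub(S) + v_sub(T))     # True: substitutes
```

For v_comp, grand-bundling at price 3 extracts the full value, while for v_sub selling the items separately at price 2 each can do better than any single bundle price, which is the intuition behind taking the better of the two simple mechanisms.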

Important information

Dates: 1st January to 5th January, 2017
Venue: Tel Aviv University, Beit-Hatfutsot

Recent updates

This conference is supported by the European Research Council (ERC)
and also by