Accepted Papers
Dimitris Fotakis, Piotr Krysta and Orestis Telelis.
Externalities among Advertisers in Sponsored Search
Abstract:
We introduce a novel computational model for single-keyword auctions in sponsored search, which explicitly models externality effects among the advertisers, an aspect that has not been (fully) reflected in the existing models, and is known to be prevalent in the behavior of real advertisers. Our model takes into account both positive and negative correlations between any pair of advertisers, and appropriately reflects them in the way they perceive an outcome of a sponsored search auction. In our model, the click-through rate of an ad depends on the set of other ads appearing in the sponsored list, on their relative order, and on their distance in the list. In contrast to previous modeling attempts, we avoid modeling end-users' behavior, relying only on the reasonable assumption that their browsing focuses on a bounded-scope section of the list. We present a comprehensive collection of computational results concerning the winner determination problem in our model with the objective of maximizing social welfare, showing both hardness of approximation results and polynomial-time approximation algorithms. We also present an exact polynomial-time algorithm for the practically relevant cases of our model, where the length of the sponsored list is at most logarithmic in the number of advertisers. This exact algorithm can be used as a truthful mechanism by employing VCG payments, thus showing that we can fully cope with the selfish behavior of advertisers when the mechanism is fully aware of their correlations. We finally study the performance of the practically used Generalized Second Price auction mechanism in our model and show that, in the presence of externalities, pure Nash equilibria may not exist for conservative bidders that do not outbid their valuation. Moreover, we exhibit instances where pure Nash equilibria do exist, but they carry an unbounded loss in social welfare as compared to the socially optimal assignment of slots, even for such conservative bidders.
Krzysztof Apt and Evangelos Markakis.
Diffusion in Social Networks with Competing Products
Abstract:
We introduce a new threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives. We identify a class of graphs that allows us to characterize the social networks for which adoption of a product by the whole network is possible (respectively, necessary) and the ones for which a unique outcome is guaranteed. We also provide a simple polynomial time algorithm for determining whether a graph belongs to this class. This directly yields polynomial time algorithms for determining whether a given social network satisfies one of the above properties. We also study algorithmic questions concerning networks without unique outcomes. In particular, we show that the problem of computing the minimum possible spread of a product is NP-hard to approximate with an approximation ratio better than $\Omega(n)$, in contrast to the maximum spread, which is efficiently computable. We then move on to questions regarding the behavior of a node with respect to adopting some (resp. a given) product. We show that the problem of determining whether a given node has to adopt some (resp. a given) product in all final networks is co-NP-complete.
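A toy sketch of a threshold dynamic with competing products, in the spirit of the model described above. The uniform-threshold rule and the smallest-label tie-breaking are our own simplifications, not the paper's formal model:

```python
def diffuse(adj, threshold, seeds):
    """Iterate a threshold adoption rule to a fixed point.

    adj: undirected graph as {node: [neighbours]}; seeds: {node: product}.
    An undecided node adopts the (smallest-labelled) product held by at
    least a `threshold` fraction of its neighbours. Returns the final
    product assignment; nodes that never adopt are absent from it.
    """
    state = dict(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in state:
                continue
            counts = {}
            for u in adj[v]:
                if u in state:
                    counts[state[u]] = counts.get(state[u], 0) + 1
            for p in sorted(counts):
                if counts[p] >= threshold * len(adj[v]):
                    state[v] = p
                    changed = True
                    break
    return state
```

On a three-node path with a single seed, the product spreads to the whole network; with two competing seeds at the ends, the middle node adopts one of them, so the final outcome is not unique in general.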
Michael Brautbar and Michael Kearns.
A Clustering Coefficient Network Formation Game
Abstract:
Social and other networks have been shown empirically to exhibit high edge clustering — that is, the density of local neighborhoods, as measured by the clustering coefficient, is often much larger than the overall edge density of the network. In social networks, a desire for tight-knit circles of friendships --- the colloquial “social clique” --- is often cited as the primary driver of such structure. We introduce and analyze a new network formation game in which rational players must balance edge purchases with a desire to maximize their own clustering coefficient. Our results include the following:
• Construction of a number of specific families of equilibrium networks, including ones showing that equilibria can have rather general tree-like structure, including highly asymmetric trees. This is in contrast to other network formation games that yield only symmetric equilibrium networks. Our equilibria also include ones with large or small diameter, and ones with wide variance of degrees.
• A general characterization of (non-degenerate) equilibrium networks, showing that such networks are always sparse and paid for by low-degree vertices, whereas high-degree “free riders” always have low utility.
• A proof that for edge cost \alpha > 1/2 the Price of Anarchy grows linearly with the population size n, while for edge cost \alpha < 1/2 the Price of Anarchy of the formation game is bounded by a constant depending only on \alpha, and independent of n. Moreover, an explicit upper bound is constructed when the edge cost is a “simple” rational (small numerator) less than 1/2.
• A proof that for edge cost \alpha < 1/2 the average vertex clustering coefficient grows at least as fast as a function depending only on \alpha, while the overall edge density goes to zero at a rate inversely proportional to the number of vertices in the network.
• Results establishing the intractability of even weakly approximating best response computations.
Several of our results hold even for weaker notions of equilibrium, such as those based on link stability. We also consider other variants of the game, including a non-normalized version of the clustering coefficient and a version with bilateral edge purchases.
Marco Comi, Bhaskar Dasgupta, Michael Schapira and Venkatakumar Srinivasan.
On Communication Protocols that Compute Almost Privately
Abstract:
A traditionally desired goal when designing auction mechanisms is incentive compatibility, i.e., ensuring that bidders fare best by truthfully reporting their preferences. A complementary goal, which has, thus far, received significantly less attention, is to preserve privacy, i.e., to ensure that bidders reveal no more information than necessary. We further investigate and generalize the approximate privacy model for two-party communication recently introduced by Feigenbaum et al. [8]. We explore the privacy properties of a natural class of communication protocols that we refer to as "dissection protocols". Dissection protocols include, among others, the bisection auction in [9,10] and the bisection protocol for the millionaires' problem in [8]. Informally, in a dissection protocol the communicating parties are restricted to answering simple questions of the form "Is your input between the values \alpha and \beta (under a pre-defined order over the possible inputs)?". We prove that for a large class of functions, called tiling functions, which include the 2nd-price Vickrey auction, there always exists a dissection protocol that provides a constant average-case privacy approximation ratio for uniform or "almost uniform" probability distributions over inputs. To establish this result we present an interesting connection between the approximate privacy framework and basic concepts in computational geometry. We show that such a good privacy approximation ratio for tiling functions does not, in general, exist in the worst case. We also discuss extensions of the basic setup to more than two parties and to non-tiling functions, and provide calculations of privacy approximation ratios for two functions of interest.
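As an illustration of the dissection idea, here is a toy bisection protocol for the millionaires' problem. This is our own simplification, not the protocol of [8]: both parties repeatedly announce which half of the current range their input lies in, and the protocol stops as soon as the answers differ.

```python
def bisection_millionaires(x, y, lo=0, hi=1024):
    """Decide whether x >= y (inputs in [lo, hi)) by bisection.

    Each round both parties reveal one bit: which half of the current
    range their input lies in. If the halves differ, the richer party
    is identified; otherwise the protocol recurses on the common half.
    Returns (x >= y, number of bits communicated).
    """
    bits = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        in_upper_x, in_upper_y = x >= mid, y >= mid
        bits += 2
        if in_upper_x != in_upper_y:
            return in_upper_x, bits  # the party in the upper half is richer
        lo, hi = (mid, hi) if in_upper_x else (lo, mid)
    return x >= y, bits  # inputs narrowed down to the same single value
```

Note the privacy trade-off the paper studies: when the inputs fall in different halves early, little is revealed, but close inputs force the protocol to narrow the range, revealing more about both.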
Abstract:
Our main goal is to abstract existing repeated sponsored search ad auction mechanisms, which include budgets, and study their equilibria and dynamics. Our abstraction has multiple agents bidding repeatedly for multiple identical items (such as impressions in an ad auction). The agents are budget limited and have a value per item. We abstract the repeated interaction as a one-shot game, which we call a "budget auction", where agents submit a bid and a budget, and then items are sold by a sequential second price auction. Once an agent exhausts its budget, it does not participate in subsequent auctions. Our main result is that if agents bid conservatively (never bid above their value) then there always exists a pure Nash equilibrium. We also study simple dynamics of repeated budget auctions, showing their convergence to a Nash equilibrium for two agents and for multiple agents with identical budgets.
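A minimal simulation of the sequential second-price allocation described above. The tie-breaking and the exact budget-exhaustion rule are our own simplifications of the model:

```python
def budget_auction(agents, num_items):
    """Sequential second-price auction with budget-limited agents.

    agents: list of (bid, budget) pairs. Each item goes to the highest
    live bidder at the second-highest live bid (0 if only one bidder
    remains); an agent drops out once its payments reach its budget.
    Returns a list of (items_won, total_paid), one entry per agent.
    """
    state = [{'bid': b, 'budget': B, 'won': 0, 'paid': 0.0}
             for b, B in agents]
    for _ in range(num_items):
        live = [a for a in state if a['paid'] < a['budget']]
        if not live:
            break
        live.sort(key=lambda a: -a['bid'])
        price = live[1]['bid'] if len(live) > 1 else 0.0
        live[0]['won'] += 1
        live[0]['paid'] += price
    return [(a['won'], a['paid']) for a in state]
```

For example, with agents bidding (10, budget 15) and (8, budget 100) over three items, the high bidder wins two items at price 8 each, exhausts its budget, and the second agent picks up the last item unopposed.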
Noam Berger, Michal Feldman, Ofer Neiman and Mishael Rosenthal.
Dynamic Inefficiency: Anarchy without Stability
Abstract:
The price of anarchy [papadimitriou2001] is by now a standard measure for quantifying the inefficiency introduced in games due to selfish behavior, and is defined as the ratio between the optimal outcome and the worst Nash equilibrium. However, this notion is well defined only for games that always possess a Nash equilibrium (NE). We propose the dynamic inefficiency measure, which is roughly defined as the average inefficiency in an infinite best-response dynamic. Both the price of anarchy [papadimitriou2001] and the price of sinking [Goemans05] can be obtained as special cases of the dynamic inefficiency measure. We consider three natural best-response dynamic rules --- Random Walk (RW), Round Robin (RR) and Best Improvement (BI) --- which are distinguished according to the order in which players apply best-response moves. In order to make the above concrete, we use the proposed measure to study the job scheduling setting introduced in [azar2008], and in particular the scheduling policy introduced there. While the proposed policy achieves the best possible price of anarchy with respect to a pure NE, the game induced by the proposed policy may admit no pure NE, thus the dynamic inefficiency measure better reflects the worst-case inefficiency. We show that the dynamic inefficiency may be arbitrarily higher than the price of anarchy, under any of the three dynamic rules. As the dynamic inefficiency of the RW dynamic coincides with the price of sinking, this result resolves an open question raised in [azar2008]. We further use the proposed measure to study the inefficiency of the Hotelling game and the facility location game. We find that using different dynamic rules may yield diverse inefficiency outcomes; moreover, it seems that no single dynamic rule is superior to another.
Abstract:
Weakly-acyclic games --- a superclass of potential games --- capture distributed environments where simple, globally-asynchronous interactions between strategic agents are guaranteed to converge to an equilibrium. We explore the class of routing games in [FP08, LSZ08], which models important aspects of routing on the Internet. We show that, in interesting contexts, such routing games are weakly acyclic and, moreover, that pure Nash equilibria in such games can be found in a computationally-efficient manner.
Abstract:
We consider the problem of fairly dividing a heterogeneous cake between a number of players with different tastes. In this setting, it is known that fairness requirements may result in a suboptimal division from the social welfare standpoint. Here, we show that in some cases, discarding some of the cake and fairly dividing only the remainder may be socially preferable to any fair division of the entire cake. We study this phenomenon, providing asymptotically-tight bounds on the social improvement achievable by such discarding.
Vincenzo Bonifaci, Mahyar Salek and Guido Schäfer.
Efficiency of Restricted Tolls in Non-atomic Network Routing Games
Abstract:
An effective means to reduce the inefficiency of Nash flows in non-atomic network routing games is to impose tolls on the arcs of the network. It is a well-known fact that marginal cost tolls induce a Nash flow that corresponds to a minimum cost flow. However, despite their effectiveness, marginal cost tolls suffer from two major drawbacks, namely (i) that potentially every arc of the network is tolled, and (ii) that the imposed tolls can be arbitrarily large. In this paper, we study the restricted network toll problem in which tolls can be imposed on the arcs of the network but are restricted to not exceed a predefined threshold for every arc. We show that optimal restricted tolls can be computed efficiently for parallel-arc networks and affine latency functions. This generalizes previous work on taxing subnetworks to arbitrary restrictions. Our algorithm is quite simple, but relies on solving several convex programs. The key to our approach is a characterization of the flows that are inducible by restricted tolls for single-commodity networks. We also derive bounds on the efficiency of restricted tolls for multi-commodity networks and polynomial latency functions. These bounds are tight even for parallel-arc networks. Our bounds show that restricted tolls can significantly reduce the price of anarchy if the restrictions imposed on arcs with high-degree polynomials are not too severe. Our proof is constructive. We define tolls respecting the given thresholds and show that these tolls lead to a reduced price of anarchy by using a (\lambda,\mu)-smoothness approach.
Abstract:
Consider the placement of video ads into commercial breaks on a television channel. Ads arrive online over time and each has an expiration date. Commercial breaks are typically of the same duration; the video ads, however, may have arbitrary sizes. Each ad has a private value and should be broadcast at most once, in some break, by its expiration date. The player gets her value if her ad is broadcast by the ad's expiration day (and, obviously, after its arrival day), and otherwise has zero value. Arranging the ads into the commercial breaks while maximizing the players' profit is a classical problem of ad placement subject to capacity constraints that should be solved truthfully. However, we are interested not only in truthfulness but also in a prompt mechanism, where the payment of an agent is determined at the moment her ad is broadcast. Promptness is the crucial requirement on our algorithm, as it allows the payment process to be conducted without creating any redundant relationship between the auctioneer and the players. An inability to solve this problem could eventually prevent the application of such mechanisms in real life. We design a $6$-approximation prompt mechanism for the problem. Previously, Cole et al. considered the special case where all ads have the same size, equal to the break size, and achieved a $2$-approximation prompt mechanism. The general case of ads with arbitrary sizes is considerably more involved and requires designing a new algorithm, which we call the Gravity Algorithm.
Eyal Gofer and Yishay Mansour.
Pricing Exotic Derivatives Using Regret Minimization
Abstract:
We price a wide range of financial instruments which are classified as exotic options, using the regret bounds of an online algorithm. In addition, we derive a very general result, which upper bounds the price of any derivative whose payoff is a convex function of the final asset price. The market model used is fully adversarial, making our price bounds robust. Our results significantly extend the work of DeMarzo et al., which used regret minimization to price the standard European call option, and thereby demonstrate the wide applicability of regret minimization to derivative pricing. We complement our theoretical results with an empirical study.
Bruno Escoffier, Laurent Gourves and Jerome Monnot.
The price of optimum in a matching game
Abstract:
Due to the lack of coordination, it is unlikely that the selfish players of a strategic game reach a socially good state. Using Stackelberg strategies is a popular way to improve the system’s performance. Stackelberg strategies consist of controlling the actions of a fraction of the players. However, compelling an agent can be costly, unpopular, or just hard to implement. It is then natural to ask for the least costly way to reach a desired state. This paper deals with a simple strategic game which has a high price of anarchy: the nodes of a simple graph are independent agents who try to form pairs. We analyse the optimization problem where the actions of a minimum number of players shall be fixed and any possible equilibrium of the modified game must be a social optimum (a maximum matching). For this problem, deciding whether a solution is feasible or not is not straightforward, but we prove that it can be done in polynomial time. In addition, the problem is shown to be APX-hard, since its restriction to graphs admitting a vertex cover is equivalent, from the approximability point of view, to vertex cover in general graphs.
Vittorio Bilo' and Marios Mavronicolas.
Complexity of Rational and Irrational Nash Equilibria
Abstract:
We introduce two new decision problems, denoted as ∃ RATIONAL NASH and ∃ IRRATIONAL NASH, pertinent to the rationality and irrationality, respectively, of Nash equilibria for (finite) strategic games. These problems ask, given a strategic game, whether or not it admits (i) a rational Nash equilibrium, where all probabilities are rational numbers, and (ii) an irrational Nash equilibrium, where at least one probability is irrational, respectively. We are interested here in the complexities of ∃ RATIONAL NASH and ∃ IRRATIONAL NASH. Towards this end, we study two other decision problems, denoted as NASH-EQUIVALENCE and NASH-REDUCTION, pertinent to some mutual properties of the sets of Nash equilibria of two given strategic games with the same number of players. NASH-EQUIVALENCE asks whether the two sets of Nash equilibria coincide; we identify a restriction of its complementary problem that witnesses ∃ RATIONAL NASH. NASH-REDUCTION asks whether or not there is a so-called Nash reduction (a suitable map between corresponding strategy sets of players) that yields a Nash equilibrium of the former game from a Nash equilibrium of the latter game; we identify a restriction of it that witnesses ∃ IRRATIONAL NASH. As our main result, we provide two distinct reductions to simultaneously show that (i) NASH-EQUIVALENCE is co-NP-hard and ∃ RATIONAL NASH is NP-hard, and (ii) NASH-REDUCTION and ∃ IRRATIONAL NASH are NP-hard, respectively. The reductions significantly extend techniques previously employed by Conitzer and Sandholm [5,6].
Panagiota Panagopoulou and Paul Spirakis.
Random Bimatrix Games are Asymptotically Easy to Solve (A Simple Proof)
Abstract:
We focus on the problem of computing approximate Nash equilibria and well-supported approximate Nash equilibria in random bimatrix games, where each player's payoffs are bounded and independent random variables, not necessarily identically distributed, but with common expectations. We show that the completely mixed uniform strategy profile, i.e. the combination of mixed strategies (one per player) where each player plays with equal probability each one of her available pure strategies, is with high probability a $\sqrt{\frac{\ln n}{n}}$-Nash equilibrium and a $\sqrt{\frac{3\ln n}{n}}$-well supported Nash equilibrium, where $n$ is the number of pure strategies available to each player. This asserts that the completely mixed, uniform strategy profile is an \emph{almost Nash equilibrium} for random bimatrix games, since it is, with high probability, an $\epsilon$-well-supported Nash equilibrium where $\epsilon$ tends to zero as $n$ tends to infinity.
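A quick empirical check of the statement above. We use i.i.d. uniform [0,1] payoffs as a stand-in for the bounded, common-expectation distributions of the paper; the function name and setup are our own:

```python
import math
import random

def uniform_profile_eps(n, seed=0):
    """Empirical epsilon of the uniform profile in a random n x n
    bimatrix game with i.i.d. uniform [0,1] payoffs.

    epsilon = max over the two players of the gain from a best pure
    deviation against the opponent's uniform mixed strategy.
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n)] for _ in range(n)]  # row player
    B = [[rng.random() for _ in range(n)] for _ in range(n)]  # column player
    row_vals = [sum(row) / n for row in A]            # row i vs uniform column
    col_vals = [sum(B[i][j] for i in range(n)) / n    # column j vs uniform row
                for j in range(n)]
    u_row = sum(row_vals) / n  # payoff of playing uniformly oneself
    u_col = sum(col_vals) / n
    return max(max(row_vals) - u_row, max(col_vals) - u_col)
```

For n = 400 the bound $\sqrt{\ln n / n}$ is about 0.122, and with high probability the empirical epsilon of the uniform profile falls below it, consistent with the theorem.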
Ameya Hate, Elliot Anshelevich and Koushik Kar.
Strategic Pricing in Next-hop Routing with Elastic Demands
Abstract:
We consider a model of next-hop routing by self-interested agents. In this model, nodes in a graph (representing ISPs, Autonomous Systems, etc.) make pricing decisions of how much to charge for forwarding traffic from each of their upstream neighbors, and routing decisions of which downstream neighbors to forward traffic to (i.e., choosing the next hop). Traffic originates at a subset of these nodes that derive a utility when the traffic is routed to its destination node; the traffic demand is elastic and the utility derived from it can be different for different source nodes. Our next-hop routing and pricing model is in sharp contrast with the more common source routing and pricing models, in which the source of traffic determines the entire route from source to destination. For our model, we begin by showing sufficient conditions for prices to result in a Nash equilibrium, and in fact give an efficient algorithm to compute a Nash equilibrium which is as good as the centralized optimum, thus proving that the price of stability is 1. When only a single source node exists, the price of anarchy is 1 as well, as long as some minor assumptions on player behavior are made. The above results hold for arbitrary convex pricing functions, but with the assumption that the utilities derived from getting traffic to its destination are linear. When utilities can be non-linear functions, we show that a Nash equilibrium may not exist, even with simple discrete pricing models.
Abstract:
We embark on an agenda to investigate how stochasticity and risk-aversion transform traditional models of routing games and equilibrium concepts. Moving from deterministic to stochastic, specifically risk-averse, settings introduces nonconvexities that are difficult to analyze even in the most stylized cases of fixed variability: in these settings, even computing a player's best response is of unknown complexity [43,41]. This paper focuses on equilibrium existence and characterization in the different cases of atomic vs. nonatomic players and exogenous vs. endogenous factors causing the variability of link delays. We also investigate the inefficiencies resulting from stochastic selfish routing and obtain the surprising result that, under exogenous stochasticity factors, the price of anarchy bounds are the same as the corresponding bounds in deterministic models.
Patrick Jordan, Mohammad Mahdian, Sergei Vassilvitskii and Erik Vee.
The Multiple Attribution Problem in Pay-Per-Conversion Advertising
Abstract:
In recent years the online advertising industry has witnessed a shift from the more traditional pay-per-impression model to the pay-per-click and more recently to the pay-per-conversion model. Such models require the ad allocation engine to translate the advertiser's value per click/conversion to value per impression. This is often done through simple models that assume that each impression of the ad stochastically leads to a click/conversion independent of other impressions of the same ad, and therefore any click/conversion can be attributed to the last impression of the ad. However, this assumption is unrealistic, especially in the context of pay-per-conversion advertising, where it is well known in the marketing literature that the consumer often goes through a purchasing funnel before they make a purchase. Decisions to buy are rarely spontaneous, and therefore are not likely to be triggered by just the last ad impression. In this paper, we observe how the current method of attribution leads to inefficiency in the allocation mechanism. We develop a fairly general model to capture how a sequence of impressions can lead to a conversion, and solve the optimal ad allocation problem in this model. We will show that this allocation can be supplemented with a payment scheme to obtain a mechanism that is incentive compatible for the advertiser and fair for the publishers.
Long Tran-Thanh, Maria Polukarov, Archie Chapman, Alex Rogers and Nicholas R. Jennings.
On the Existence of Pure Strategy Nash Equilibria in Integer-Splittable Weighted Congestion Games
Abstract:
We study the existence of pure strategy Nash equilibria (PSNE) in integer-splittable weighted congestion games (ISWCGs), where agents can strategically assign different amounts of demand to different resources, but must distribute this demand in fixed-size parts. Such scenarios arise in a wide range of application domains, including job scheduling and network routing, where agents have to allocate multiple tasks and can assign a number of tasks to a particular selected resource. Specifically, in an ISWCG, an agent has a certain total demand (aka weight) that it needs to satisfy, and can do so by requesting one or more integer units of each resource from its given subset of accessible resources. Each resource is associated with a unit-cost function of its level of congestion; as such, the cost to an agent for using a particular resource is the product of the resource unit-cost and the number of units the agent requests. While general ISWCGs do not admit PSNE (Rosenthal, 1973), the restricted subclass of these games with linear unit-cost functions has been shown to possess a potential function (Meyers, 2006), and hence, PSNE. However, the linearity of costs is not necessary for the existence of equilibria in pure strategies. Thus, in this paper we prove that PSNE always exist in a larger class of ISWCGs with convex and monotonically increasing unit costs. Importantly, we show that neither monotonicity nor convexity on its own guarantees this result. Moreover, we give a counterexample with monotone and semi-convex cost functions, thus distinguishing ISWCGs from the class of infinitely-splittable congestion games for which the conditions of monotonicity and semi-convexity have been shown to be necessary and sufficient for PSNE existence (Rosen, 1965). Furthermore, we demonstrate that the finite improvement path property (FIP) does not hold for convex increasing ISWCGs. Thus, in contrast to the case with linear costs, a potential function argument cannot be used to prove our result. Instead, we provide a procedure that converges to an equilibrium from an arbitrary initial strategy profile, and in doing so show that ISWCGs with convex increasing unit-cost functions are weakly acyclic.
Abstract:
We initiate the study of game dynamics in the Sum Basic Network Creation Game, which was recently introduced by Alon et al. [SPAA'10]. In this game players are associated to vertices in a graph and are allowed to "swap" edges, that is, to remove an incident edge and insert a new incident edge. By performing such moves, every player tries to minimize her connection cost, which is the sum of distances to all other vertices. When played on a tree, we prove that this game admits an ordinal potential function, which implies guaranteed convergence to a pure Nash Equilibrium. We show a cubic upper bound on the number of steps needed for any improving response dynamic to converge to a stable tree, and propose and analyse a best response dynamic where the players having the highest cost are allowed to move. For this dynamic we show an almost tight linear upper bound on the convergence speed. Furthermore, we contrast these positive results by showing that, when played on general graphs, this game allows best response cycles. This implies that there cannot exist an ordinal potential function and that fundamentally different techniques are required for analysing this case. For computing a best response we show a similar contrast: On the one hand, we give a linear-time algorithm for computing a best response on trees, even if players are allowed to swap multiple edges at a time. On the other hand, we prove that this task is NP-hard even on simple general graphs, if more than one edge can be swapped at a time. The latter addresses a proposal by Alon et al.
Elizabeth Bodine-Baron, Christina Lee, Anthony Chong, Babak Hassibi and Adam Wierman.
Peer Effects and Stability in Matching Markets
Abstract:
Many-to-one matching markets exist in numerous different forms, such as college admissions, matching medical interns to hospitals for residencies, assigning housing to college students, and the classic firms and workers market. In all these markets, externalities such as complementarities and peer effects severely complicate the preference ordering of each agent. Further, research has shown that externalities lead to serious problems for market stability and for developing efficient algorithms to find stable matchings. In this paper we make the observation that peer effects are often the result of underlying social connections, and we explore a formulation of the many-to-one matching market where peer effects are derived from an underlying social network. The key feature of our model is that it captures peer effects and complementarities using utility functions, rather than traditional preference ordering. With this model and considering a weaker notion of stability, namely pairwise stability, we prove that stable matchings always exist and characterize the set of stable matchings in terms of social welfare. We also give distributed algorithms that are guaranteed to converge to a pairwise stable matching. To assess the competitive ratio of these algorithms and to more generally characterize the efficiency of matching markets with externalities, we provide general bounds on how far the welfare of the worst-case stable matching can be from the welfare of the optimal matching, and find that the structure of the social network (e.g. how well clustered the network is) plays a large role.
Zhenghui Wang and Lisa Fleischer.
Lower Bound for Envy-free and Truthful Makespan Approximation on Related Machines
Abstract:
We study problems of scheduling jobs on related machines so as to minimize the makespan in the setting where machines are strategic agents. In this problem, each job $j$ has a length $l_{j}$ and each machine $i$ has a private speed $t_{i}$. The running time of job $j$ on machine $i$ is $t_{i}l_{j}$. We seek a mechanism that obtains speed bids of machines and then assigns jobs and payments to machines so that the machines have an incentive to report true speeds and the allocation and payments are also envy-free. We show that:
1. A deterministic envy-free, truthful, individually rational, and anonymous mechanism cannot approximate the makespan strictly better than $2-1/m$, where $m$ is the number of machines. This result contrasts with prior work giving a deterministic PTAS for envy-free anonymous assignment and a distinct deterministic PTAS for truthful anonymous mechanisms.
2. For two machines of different speeds, the unique deterministic scalable allocation of any envy-free, truthful, individually rational, and anonymous mechanism is to allocate all jobs to the quickest machine. This allocation is the same as that of the VCG mechanism, yielding a 2-approximation to the minimum makespan.
3. No payments can make any of the prior published monotone and locally efficient allocations [aas, at, ck10, dddr, kovacs] for $Q||C_{\max}$ a truthful, envy-free, individually rational, and anonymous mechanism.
Navendu Jain, Ishai Menache, Joseph Naor and Jonathan Yaniv.
A Truthful Mechanism for Value-Based Scheduling in Cloud Computing
Abstract:
Cloud computing provides an attractive computation platform, in which computing resources (e.g., virtual machines, storage capacity) are rented to end-users under a utility pricing model. The most common pricing offering is a pay-as-you-go scheme, in which users are charged a fixed price per unit resource per hour. While this scheme is simple to implement, it does not support value-based scheduling, where users assign a value to their job and the cloud schedules to maximize aggregate value (or prioritize competing jobs) under job demand and capacity constraints. In this paper, we introduce a novel pricing approach for batch jobs (e.g., image processing, financial analytics, scientific simulations). In our economic model, users submit jobs with a value function that specifies willingness to pay as function of job due dates. Focusing on social-welfare as the system objective (especially relevant for private or in-house clouds), we construct a resource allocation algorithm which obtains a (small) constant-factor approximation of maximum aggregate value, assuming that user valuations are known. Based on this algorithm, we then design an efficient truthful-in-expectation mechanism, which significantly improves the running complexity of black-box reduction mechanisms that can be applied to the problem, thereby facilitating its implementation in real systems.
Oskar Skibski.
Steady Marginality: A Uniform Approach to Shapley Value for Games with Externalities
[Show/Hide abstract]
Abstract:
The Shapley value is one of the most important solution concepts in cooperative game theory. In coalitional games without externalities, it allows one to compute a unique payoff division that meets certain desirable fairness axioms. However, in many realistic applications where externalities are present, Shapley's axioms fail to single out such a unique division. Consequently, many extensions of the Shapley value to environments with externalities, built upon additional axioms, have been proposed in the literature. Two important such extensions are the ``externality-free'' value of Pham Do and Norde and the value of McQuillin that ``absorbs all externalities''. They are good reference points in the space of potential payoff divisions for coalitional games with externalities, as they delimit this space at two opposite extremes. In a recent, important publication, De Clippel and Serrano presented a marginality-based axiomatization of the value of Pham Do and Norde. In this paper, we propose a dual approach to marginality which allows us to derive the value of McQuillin. Thus, we close the picture outlined by De Clippel and Serrano.
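For readers unfamiliar with the classical (externality-free) Shapley value the abstract builds on, the sketch below computes it by averaging marginal contributions over all player orderings, on a hypothetical three-player glove game; the game and names are illustrative, not from the paper.

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value via average marginal contributions:
    phi_i = average over orderings of v(predecessors + i) - v(predecessors)."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        seen = frozenset()
        for i in order:
            phi[i] += v(seen | {i}) - v(seen)
            seen = seen | {i}
    return {i: phi[i] / len(orders) for i in players}

# Hypothetical glove game: a coalition is worth 1 iff it holds the left
# glove (player 1) and at least one right glove (player 2 or 3).
def v(S):
    return 1.0 if 1 in S and (2 in S or 3 in S) else 0.0

print(shapley([1, 2, 3], v))  # player 1 gets 2/3; players 2 and 3 get 1/6 each
```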
Haris Aziz, Felix Brandt and Paul Harrenstein.
Pareto Optimality in Coalition Formation
[Show/Hide abstract]
Abstract:
A minimal requirement on allocative efficiency in the social sciences is Pareto optimality. In this paper, we identify a far-reaching structural connection between Pareto optimal and perfect partitions that has various algorithmic consequences for coalition formation. In particular, we show that computing and verifying Pareto optimal partitions in general hedonic games and B-hedonic games is intractable, while both problems are tractable for roommate games and W-hedonic games. The latter two positive results are obtained by reductions to maximum weight matching and clique packing, respectively.
Abstract:
In this paper, we study the combinatorial agency problem introduced by Babaioff, Feldman, and Nisan, and resolve some open questions posed in their original paper. Our results include a characterization of the transition behavior for the class of threshold functions. This result confirms a conjecture of Babaioff et al. and generalizes their results on the transition behavior for the OR and AND technologies. In addition to establishing a (tight) bound of 2 on the social Price of Unaccountability (POU) for the OR technology in the general case of n > 2 agents (the initial paper established this for n = 2; an extended version establishes a bound of 2.5 for the general case), we show that the POU is unbounded for all other threshold functions (the initial paper established this only for the AND technology). We also obtain a characterization result for certain compositions of anonymous technologies and establish an unbounded POU for these cases.
Abstract:
We consider mechanisms without payments for the problem of scheduling unrelated machines. Specifically, we consider truthful-in-expectation randomized mechanisms under the assumption that a machine (player) is bound by its reports: when a machine lies and reports value $s_{ij}$ for a task instead of the actual one $t_{ij}$, it will execute the task for time $s_{ij}$ if it gets it, unless the declared value $s_{ij}$ is less than the actual value $t_{ij}$, in which case it will execute for time $t_{ij}$. Our main technical result is an optimal mechanism for one task and $n$ players, which has approximation ratio $(n+1)/2$. We also provide a matching lower bound, showing that no mechanism can achieve a better approximation ratio. This immediately gives approximation ratios of $(n+1)/2$ and $n(n+1)/2$ for social cost and makespan minimization, respectively, for any number of tasks.
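The bound-by-report assumption above amounts to a one-line rule: a machine can slow itself down to its declared value but can never run faster than its true time. A minimal sketch under that reading (the function name is ours, not the paper's):

```python
def execution_time(t_ij, s_ij):
    """Actual running time under the bound-by-report assumption:
    the machine runs for the declared s_ij when it overstates,
    but cannot beat its true time t_ij when it understates."""
    return max(t_ij, s_ij)

print(execution_time(3.0, 5.0))  # overstated report: runs for the declared 5.0
print(execution_time(3.0, 2.0))  # understated report: still takes the true 3.0
```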