The ACGO seminars are held every week by a group of researchers working on Algorithms, Combinatorics, Game Theory and Optimization. Our team consists of researchers from different disciplines and from several Chilean universities. If you are interested in giving a talk or in receiving the seminar announcements, please send us an email to jverschae followed by uc.cl.

**2018-16-05:**

**Author:** Tjark Vredeveld, Maastricht University.

**Where:** Republica 701, Sala 33.

**When:** Wednesday, May 16, 14:30.

**Title:** On the equitable Hamiltonian Cycle problem

**Abstract:** Kinable, Smeulders, Delcour, and Spieksma (2017) introduced the Equitable TSP (E-TSP). In the E-TSP, we are given an even number of cities and distances between each pair of them. Instead of finding a tour of minimum length, Kinable et al. (uniquely) decompose the tour into two perfect matchings, one with “even” edges and one with “odd” edges, and the goal is to minimize the difference between the costs of the two perfect matchings. Kinable et al. show that the E-TSP is strongly NP-hard by reduction from Hamiltonian Cycle. The reduction resembles the one showing that TSP is strongly NP-hard. In the reduction for TSP, it suffices to set each distance to one of two values, e.g., 0 or 1. The reduction used for the E-TSP, however, needs many different distance values. This raises the question whether the optimal value of the E-TSP must be 0, or can be positive, when each distance takes only one of two values.

We generalized this question to the Equitable Hamiltonian Cycle Problem: given a complete graph on an even number of vertices in which each edge is either red or blue, can we find a tour on all vertices such that the “even” perfect matching has as many red edges as the “odd” one? We will show that the answer to this question is both NO and YES, and for the YES-case we present the idea of how to find such a tour in O(n log n) or even O(n) time. Furthermore, if time permits, we also discuss how well some simple local search algorithms perform.

This is joint work with Frits Spieksma and Tim Ophelders (TU Eindhoven).

**2018-09-05:**

**Author:** Jeremy Barbay, DCC, U Chile.

**Where:** Republica 701, Sala 33.

**When:** Wednesday, May 9, 14:30.

**Title:** Adaptive Computation of Frechet Distance: Upper and Conditional Lower Bound

**Abstract:** The Fréchet distance is a measure of similarity between curves that takes into account the location and ordering of the points along the curves, and makes it possible to abstract away, among other things, differences of resolution between the two curves (with applications to morphing, handwriting recognition and protein structure alignment, among others).

In 1991, Alt and Godau introduced this measure to Computational Geometry, describing an algorithm that computes the Fréchet distance between two polygonal curves composed of $n$ and $m$ segments respectively in time within $O(mn\log(mn))$, the first algorithm to do so in polynomial time. In 1994, Eiter and Mannila extended the notion of the Fréchet distance between curves to the Fréchet distance between sequences of points of respective sizes $n$ and $m$, demonstrating that the latter can be computed in time within $O(nm)$ and space within $O(n+m)$, and that it gives a good approximation of the Fréchet distance.
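The Eiter–Mannila recurrence is simple enough to sketch directly. The following minimal Python illustration (our own sketch of the classic dynamic program, not the refined algorithm proposed in the talk) computes the discrete Fréchet distance between two planar point sequences in $O(nm)$ time and $O(m)$ space:

```python
from math import hypot

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between point sequences P and Q,
    via the Eiter-Mannila dynamic program: O(n*m) time, O(m) space."""
    n, m = len(P), len(Q)
    prev = [0.0] * m
    for i in range(n):
        curr = [0.0] * m
        for j in range(m):
            d = hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1])
            if i == 0 and j == 0:
                curr[j] = d
            elif i == 0:
                curr[j] = max(curr[j - 1], d)
            elif j == 0:
                curr[j] = max(prev[j], d)
            else:
                # Best over advancing in P, in Q, or in both.
                curr[j] = max(min(prev[j], prev[j - 1], curr[j - 1]), d)
        prev = curr
    return prev[m - 1]
```

For two parallel horizontal chains at vertical distance 1, the diagonal coupling yields distance exactly 1.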

After 20 years without major improvements and no better lower bound than $\Omega(n+m)$, Bringmann described in 2014 the first conditional lower bound for the computation of the Discrete Fréchet Distance, via a clever reduction from CNF-SAT, under the Strong Exponential Time Hypothesis (SETH); this in turn yielded many similar results for other problems using similar dynamic programming techniques (e.g., the Edit Distance).

We propose a refinement of the algorithm and analysis of Eiter and Mannila that takes into account not only the sizes $n$ and $m$ of the point sequences, but also the difficulty $\omega$ of certifying the resulting distance, together with a corresponding refinement of Bringmann’s conditional lower bound that takes this new parameter into account. It is our hope that such a refinement of the analysis can be (easily?) extended to other problems for which dynamic programming has given good results in the past.

**2018-02-05:**

**Author:** Jan Corsten, The London School of Economics.

**Where:** Republica 701, Sala 33.

**When:** Wednesday, May 2, 14:30.

**Title:** Tiling edge-coloured complete graphs with few monochromatic graphs of bounded degree

**Abstract:** Erdős, Gyárfás and Pyber proved in 1991 that the vertices of every $r$-edge-coloured complete graph on $n$ vertices can be partitioned into $O(r^2 \log r)$ monochromatic cycles. It is very interesting, and maybe surprising, that this bound is independent of $n$. Grinshpun and Sárközy showed that in the case of two colours, a similar conclusion holds not only for cycles but for any family of graphs of bounded degree. In this talk, we consider the same problem for multiple colours.

**2018-25-04:**

**Author:** Andreas Toennis, U Chile & MPII.

**Where:** Republica 701, Sala 33.

**When:** Wednesday, April 25, 14:30.

**Title:** Lower Bounds for Online Matching on the Line.

**Abstract:** In the online matching on the line problem, the task is to match a set of requests R online to a given set of servers S. The distance metric between any two points in R ∪ S is a line metric and the objective for the online algorithm is to minimize the sum of distances between matched server-request pairs. This problem is well-studied and – despite recent improvements – there is still a large gap between the best known lower and upper bounds: the best known deterministic algorithm for the problem is O(log^2(n))-competitive, while the best known deterministic lower bound is 9.001. The lower and upper bounds for randomized algorithms are 4.5 and O(log n), respectively.

We prove that any deterministic online algorithm which in each round (i) bases the matching decision only on information local to the current request, and (ii) is symmetric (in the sense that the decision corresponding to the mirror image of some instance I is the mirror image of the decision corresponding to instance I), must be Ω(log n)-competitive. We then extend the result by showing that it also holds when the symmetry property is relaxed so that the algorithm may prefer one side over the other, but only up to some degree. This proves a barrier of Ω(log n) on the competitive ratio for a large class of “natural” algorithms. This class includes all deterministic online algorithms found in the literature so far.

Furthermore, we show that our result can be extended to randomized algorithms that locally induce a symmetric distribution over the chosen servers. The Ω(log n)-barrier on the competitive ratio holds for this class of algorithms as well.

**2018-16-04:**

**Author:** Neil Olver, VU University Amsterdam and CWI.

**Where:** Republica 701, Sala 33.

**When:** Wednesday, April 16, 14:30.

**Title:** The long-term behaviour of a queue-based dynamic traffic model.

**Abstract:** The fluid queueing model, introduced by Vickrey in ’69, is probably the simplest model that plausibly captures the notion of time-varying flows. Although the model is quite simple, our current theoretical understanding of equilibrium behaviour in this model is rather limited, and many fundamental questions remain open. I will discuss the resolution of a deceptively simple-sounding problem: do queue lengths remain bounded in the equilibria under natural necessary conditions? (Joint work with Roberto Cominetti and José Correa).

**2018-11-04:**

**Author:** J. Correa, U Chile.

**Where:** Republica 701, Sala 33.

**When:** Wednesday, April 11, 14:30.

**Title:** An open problem on matchings.

**2018-04-04:**

**Author:** Fábio Botler, Andrés Cristi, Andreas Tönnis, and Kevin Schewior, U Chile.

**Where:** Republica 701, Sala 33.

**When:** **Wednesday, April 04, 14:30.**

**Title:** SUPERSET: A (super)natural variant of the card game SET.

**Abstract:** We consider SUPERSET, a lesser-known but interesting variant of the famous card game SET. Here, players look for SUPERSETs instead of SETs, that is, the symmetric difference of two SETs that intersect in exactly one card. In this paper, we pose questions that have been previously posed for SET and provide answers to them; we also show relations between SET and SUPERSET.

For the regular deck of cards, which can be identified with F_3^4, we give an elegant proof of the fact that the maximum number of cards that can be on the table without having a SUPERSET is 9, solving an open question (McMahon et al., 2016). For the deck corresponding to F_3^d, we show that this number is Omega(1.442^d) and O(1.733^d). We also compute probabilities of the presence of a SUPERSET in a collection of cards drawn uniformly at random. Finally, we consider the computational complexity of deciding whether a multi-value version of SET or SUPERSET is contained in a given set of cards, and show an FPT-reduction from the problem for SET to that for SUPERSET, implying W[1]-hardness of the problem for SUPERSET.

This is joint work with Ruben Hoeksma.
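To make the definitions concrete: over F_3^d, three cards form a SET iff their coordinate-wise sum is 0 mod 3, and four cards form a SUPERSET iff some pairing of them completes to two SETs sharing the same (absent) fifth card. A small sketch of these checks (our own illustration based on the definitions above, not code from the paper):

```python
def is_set(a, b, c):
    # Three cards in F_3^d form a SET iff each coordinate sums to 0 mod 3.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

def completion(a, b):
    # The unique card completing cards a and b to a SET.
    return tuple((-x - y) % 3 for x, y in zip(a, b))

def is_superset(cards):
    # Four cards form a SUPERSET iff some pairing into two pairs has a
    # common completing card that is not among the four: the two SETs
    # then intersect in exactly that card.
    a, b, c, d = cards
    for (p, q), (r, s) in (((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))):
        x = completion(p, q)
        if x == completion(r, s) and x not in cards:
            return True
    return False
```

For example, in F_3^2 the cards (0,0), (1,0), (0,1), (1,1) form a SUPERSET: the SETs {(0,0), (1,1), (2,2)} and {(1,0), (0,1), (2,2)} intersect exactly in (2,2).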

**2018-28-03:**

**Author:** Yossi Arjevani, Weizmann Institute of Science.

**Where:** Republica 701, Sala 33.

**When:** **Wednesday, March 28, 14:30.**

**Title:** Structural-based Approach to Complexity of Optimization Algorithms.

**Abstract:** The past few decades have witnessed tremendous progress in the field of Mathematical Optimization, which has led to a proliferation of new optimization methods across many scientific fields. Among these is machine learning, whose applicability heavily relies on solvers capable of efficiently solving challenging large-scale optimization problems. Nevertheless, we still lack an adequate complexity theory that satisfactorily quantifies the computational resources required for solving optimization problems of this nature. Motivated by the limitations of current computational models, in the present work we propose novel theoretical foundations for investigating the computational boundaries of optimization algorithms that scale well with the dimension of the problem. We then present recent results on the application of this approach to various types of optimization problems.

**2018-21-03:**

**Author:** Tim Oosterwijk, U Chile.

**Where:** Republica 701, Sala 33.

**When:** **Wednesday, March 21, 14:30.**

**Title:** Approximating Vector Scheduling

**Abstract:** In this talk we will consider the Vector Scheduling problem, a natural generalization of the classical makespan minimization problem to multiple resources. Here, we are given n jobs, represented as d-dimensional vectors in [0,1]^d, and m identical machines, and the goal is to assign the jobs to machines such that the maximum load of each machine over all the coordinates is at most 1. For fixed d, the problem admits an approximation scheme, and the best known running time is n^{f(epsilon,d)}, where f(epsilon,d) = (1/epsilon)^{Õ(d)}, where Õ suppresses polylogarithmic terms in d. In particular, the dependence on d is double exponential. In this talk we will show that a double exponential dependence on d is necessary, and give an improved algorithm with essentially optimal running time. This gives the first efficient approximation scheme (EPTAS) for the problem. This is joint work with Nikhil Bansal, Tjark Vredeveld and Ruben van der Zwaan.

**2018-14-03:**

**Author:** Bruno Ziliotto.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** **Wednesday, March 14, 14:30.**

**Title:** Markov Decision Processes with long duration

**Abstract:** In a Markov Decision Process (MDP), at each stage, knowing the current state, the decision-maker chooses an action and receives a reward depending on the current state of the world. A new state is then drawn at random from a distribution depending on the action and on the past state. Many optimal payoff concepts have been introduced to analyze the strategic aspects of MDPs with long duration: the asymptotic value, the uniform value, the liminf average payoff criterion… We provide sufficient conditions under which these concepts coincide, and discuss some open problems. (Joint work with Xavier Venel, Paris 1 University.)
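As background, the simplest payoff criterion for MDPs is the discounted one, and its optimal values can be computed by value iteration (the asymptotic value mentioned above concerns the limit as the discount factor tends to 1). A minimal sketch with hypothetical table-based inputs, not tied to the talk's setting:

```python
def value_iteration(P, r, gamma, iters=100):
    """Discounted value iteration on a finite MDP.
    P[s][a][t]: probability of moving from state s to t under action a;
    r[s][a]:    immediate reward; gamma: discount factor in (0, 1)."""
    n = len(P)
    V = [0.0] * n
    for _ in range(iters):
        # Bellman update: best action value in each state.
        V = [max(r[s][a] + gamma * sum(p * V[t] for t, p in enumerate(P[s][a]))
                 for a in range(len(P[s])))
             for s in range(n)]
    return V
```

For a two-state example in which one action in state 0 earns reward 1 and stays put, the value of state 0 converges to 1/(1-gamma).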

**2018-05-01:**

**Author:** Kurt Mehlhorn, MPII.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** **Friday, January 05, 12:00.**

**Title:** Markets and Fair Division of Goods.

**Abstract:** Items have to be distributed to agents in a fair manner. The agents have valuations for the goods, and the value of a set of goods is simply the sum of the valuations. Nash introduced axioms for fairness and showed that for divisible goods, maximizing the product of the agents’ valuations leads to a fair division. The allocation maximizing the product is the same as the allocation in a Fisher market in which all agents have the same budget. For indivisible goods the problem is harder. Cole and Gkatzelis gave an approximation algorithm based on rounding the allocation in a Fisher market with earning caps. We go further and consider Fisher markets with earning and utility caps, i.e., buyers do not want to buy goods beyond a certain utility and sellers do not want to earn money above a certain amount. We show how to compute approximate equilibria in such markets and how this leads to fair divisions for buyers with utility caps. The talk is based on papers in SODA ’18, SAGT ’17, ESA ’16, and SODA ’16.
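To make the Nash objective concrete for indivisible goods: with additive valuations, the fair allocation maximizes the product of the agents' utilities. The brute force below (only an illustration of the objective on tiny instances, not the rounding or market algorithms from the talk) enumerates all allocations:

```python
from itertools import product

def max_nash_welfare(valuations):
    """valuations[i][j]: agent i's (additive) value for good j.
    Brute force over all assignments of goods to agents; exponential,
    so only for tiny instances."""
    n, m = len(valuations), len(valuations[0])
    best, best_alloc = -1.0, None
    for assign in product(range(n), repeat=m):  # assign[j] = owner of good j
        utils = [sum(valuations[i][j] for j in range(m) if assign[j] == i)
                 for i in range(n)]
        prod_u = 1.0
        for u in utils:
            prod_u *= u
        if prod_u > best:
            best, best_alloc = prod_u, assign
    return best, best_alloc
```

With two agents valuing goods as [[2, 1], [1, 2]], the product-maximizing allocation gives each agent her preferred good, for a Nash welfare of 4.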

**2018-03-01:**

**Author:** Alfredo Torrico, GaTech.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** **Wednesday, January 03, 14:00.**

**Title:** Robust Submodular Maximization: Offline and Online algorithms.

**Abstract:** In this work, we consider the robust submodular maximization problem subject to a matroid constraint in the offline and online settings. In the offline version, we are given a collection of k monotone submodular functions and a matroid on a ground set of size n. The goal is to select one independent set that maximizes the minimum of the submodular functions. Given the complexity of the problem, we design a bi-criteria approximation algorithm that returns a set that is the union of a logarithmic number of independent sets. In the online version of the problem, we receive a new collection of functions at each time step and aim to pick an independent set in every stage. We measure the performance of the algorithm in the expected regret setting. Again, we present a bi-criteria approximation algorithm which gives a (nearly) optimal approximation as well as regret bounds. Our results rely crucially on modifying the Follow-the-Perturbed-Leader algorithm of Kalai and Vempala.

**2017-27-12:**

**Author:** Nicolás Sanhueza, University of Birmingham.

**Where:** Republica 779B, **Sala P3, 3rd floor**.

**When:** Wednesday, December 27, 14:30.

**Title:** An asymptotic bound for the strong chromatic number.

**Abstract:** The strong chromatic number of a graph G on n vertices is the least number r with the following property: after adding isolated vertices to G so that the number of vertices is divisible by r, and taking the union with any collection of spanning disjoint copies of K_r on the same vertex set, the resulting graph has a proper vertex-colouring with r colours.

We show that for every c>0 and every graph G on n vertices with maximum degree Δ(G)≥cn, the strong chromatic number is at most (2+o(1))Δ(G), which is asymptotically best possible.

**2017-20-12:**

**Author:** Kevin Schewior, U Chile.

**Where:** Republica 779B, **Sala P3, 3rd floor**.

**When:** Wednesday, December 20, 14:30.

**Title:** The Itinerant List-Update Problem

**Abstract:** We introduce the Itinerant List-Update Problem (ILU), a relaxation of the classic List-Update Problem (LU) in which the pointer no longer has to return to a home location after each request. The motivation to introduce ILU arises from the application of track management in Domain Wall Memory. We first show an Omega(log n) lower bound on the competitive ratio for any randomized online algorithm for ILU. This shows that online ILU is harder than online LU, for which O(1)-competitive algorithms, like Move-To-Front, are known. We then show that ILU is essentially equivalent to a variation of the Minimum Linear Arrangement Problem (MLA), which we call the Dynamic Minimum Linear Arrangement (DMLA) problem. We then give an offline polynomial-time algorithm for DMLA and show that it has a polylogarithmic approximation guarantee.

This is joint work with Neil Olver, Kirk Pruhs, René Sitters, and Leen Stougie.

**2017-13-12:**

**Author:** Bhalchandra D. Thatte (UFMG, Brazil).

**Where:** Republica 779B, **Sala P2, 2nd floor**.

**When:** Wednesday, December 13, 14:30.

**Title:** Subgraph posets and graph reconstruction.

**Abstract:** In the 1940s, Ulam and Kelly conjectured that a simple undirected finite graph on at least 3 vertices can be reconstructed (up to isomorphism) from the collection of its unlabelled vertex-deleted subgraphs. In this talk I will give an introduction to the problem and some of its variations. I will then discuss some of my work on the problem of reconstructing graphs from certain partially ordered sets of subgraphs, and its relation to the reconstruction problem of Ulam and Kelly.

**2017-06-12:**

**Author:** Andreas Tönnis, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, December 6, 14:30.

**Title:** The Temp Secretary Problem and Extensions.

**Abstract:** The Temp Secretary Problem was recently introduced by Fiat et al. It is a generalization of the Secretary Problem in which commitments are temporary, for a fixed duration. This problem is closely related to online interval scheduling with random arrival dates and other online “on-the-line” problems. We present a simple online algorithm with improved performance guarantees for unit-sized items with uniform arrival rates. Our algorithmic approach uses a non-temporal relaxation, which allows us to bound the collected expected value by known techniques for the random-order model. We discuss algorithmic ideas, basic proof techniques and extensions to open problems such as non-uniform arrival rates, non-uniform capacities and covering variants. This talk is based on joint work with Thomas Kesselheim.

**2017-22-11:**

**Author:** Johannes Fischer, TU Dortmund.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, November 22, 14:30.

**Title:** Simple, Fast and Lightweight Parallel Wavelet Tree Construction.

**Abstract:** The wavelet tree (Grossi et al. 2003) and wavelet matrix (Claude et al. 2015) are compact indices for texts over an alphabet [0,\sigma) that support rank, select and access queries in O(log \sigma) time. We first present new practical sequential and parallel algorithms for wavelet tree construction. Their unifying characteristic is that they construct the wavelet tree bottom-up, i.e., they compute the last level first. We also show that this bottom-up construction can easily be adapted to wavelet matrices. In practice, our best sequential algorithm is up to twice as fast as the currently fastest sequential wavelet tree construction algorithm (serialWT), while simultaneously saving a factor of 2 in space. This scales up to 32 cores, where we are about as fast as recWT (the currently fastest parallel wavelet tree construction algorithm), but use only about 75% of the space.

**2017-15-11:**

**Author:** Claudio Telha Cornejo, CORE & OM Partners.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, November 15, 14:30.

**Title:** A competitive uncapacitated lot-sizing game.

**Abstract:** We study a game merging the lot-sizing problem with a Cournot competition. Each player is a producer with her own production facility, modeled as an uncapacitated lot-sizing problem (i.e., production incurs set-up and variable costs and inventories are allowed). A Cournot competition is played in each time period (market) with each player deciding the quantity of product to place on it. The market price of that product in each time period depends on the total quantity placed in the market.

In this talk we discuss some algorithmic results regarding the complexity of finding a Nash equilibrium for this game. It is based on joint work with Margarida Carvalho, João Pedro Pedroso and Mathieu Van Vyve.

**2017-08-11:**

**Author:** Daniel Quiroz, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, November 8, 14:30.

**Title:** Colouring with conditions on distances

**Abstract:**

A natural generalisation of the chromatic number comes from requiring vertices to have different colours if they are at distance at most p. In this context, it is clear that upper bounds on the number of colours depend on the maximum degree of the graph. A lot of work has been done on this topic, in particular around Wegner’s conjecture for planar graphs and distance 2. The situation changes if we require vertices to have different colours if they are at distance exactly p. For that type of colouring, the answer to the question “how many colours are needed?” depends greatly on the parity of p. In the talk I will give an overview of the work that has been done on these types of colouring, and will discuss in more detail our new results on colouring at distance exactly p. The talk is based on joint work with Jan van den Heuvel and H.A. Kierstead.
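For the "distance at most p" variant, a naive greedy colouring already shows how the maximum degree enters: a vertex has at most Δ^p other vertices within distance p, so greedy never needs more than Δ^p + 1 colours. A small sketch of that greedy procedure (an illustration only, not the techniques from the talk):

```python
from collections import deque

def within_distance(G, u, p):
    # BFS: all vertices at distance 1..p from u in adjacency-dict G.
    seen = {u: 0}
    q = deque([u])
    out = set()
    while q:
        v = q.popleft()
        if seen[v] == p:
            continue
        for w in G[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                out.add(w)
                q.append(w)
    return out

def greedy_distance_p_colouring(G, p):
    # Assign each vertex the smallest colour not used within distance p.
    colour = {}
    for u in G:
        banned = {colour[w] for w in within_distance(G, u, p) if w in colour}
        c = 0
        while c in banned:
            c += 1
        colour[u] = c
    return colour
```

On the path 0-1-2-3 with p = 2, greedy uses three colours, and the endpoints (at distance 3) may share a colour.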

**2017-11-10:**

**Author:** Andrés Cristi, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, October 11, 14:30.

**Title:** A Near Optimal Mechanism for Energy Aware Scheduling

**Abstract:**

Consider a cloud server, where clients can submit jobs for processing. The quality of service that each agent receives is given by a private non-decreasing function of the completion time of her job. The server has to process the jobs and charge each agent while trying to optimize the social cost, defined as the energy expenditure plus the sum of the values of the internal cost functions. The server operator would like to design a mechanism to optimize this objective which is ideally computationally tractable, charges the users “fairly”, and induces a game that has an equilibrium. We present a mechanism that combines the aforementioned properties with a constant Price of Anarchy. An interesting feature of our mechanism is that it is indirect: each user needs only to declare an upper bound on the completion time of her job, and not the cost function.

**2017-04-10:**

**Author:** José Verschae, PUC.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, October 04, 14:30.

**Title:** Symmetry and hierarchies in approximation algorithms: some open problems

**Abstract:**

One of the most important tools for obtaining good approximation algorithms is based on linear and semi-definite relaxations. Different “hierarchies” have appeared in the literature that try to automatically yield the best possible polynomial-size relaxations. Recently, many examples have appeared showing that these techniques perform poorly in the presence of symmetries. In this talk I will first present some general techniques for dealing with symmetries based on computational group theory. Then I will introduce the basics of the Sherali-Adams hierarchy and state some open problems that need to be addressed in order to apply group-theoretical techniques in the context of the Sherali-Adams hierarchy.

**2017-27-09:**

**Author:** Martin Matamala, DIM, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, September 27, 14:30.

**Title:** An extension of (perfect) matching

**Abstract:**

Consider a finite graph where each vertex v is colored with a color c(v). The new structure, call it an X-(perfect) matching, is an edge coloring (not necessarily proper) such that each vertex v has at most (exactly) one incident edge colored with color c(v). When all vertices have the same color, an X-(perfect) matching is a (perfect) matching of the graph.

**2017-13-09:**

**Author:** Antoine Hochart, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, September 13, 14:30.

**Title:** Ergodicity of zero-sum stochastic games

**Abstract:**

Some problems related to zero-sum stochastic games (e.g., the existence and characterization of the asymptotic value) can be studied by analyzing their dynamic programming operators, the so-called Shapley operators. In particular, when some optimality equation is solvable, the asymptotic value exists and is independent of the initial state.

In the zero-player case (with a finite state space), that is, for finite Markov processes with rewards, the ergodicity of the Markov chain entails the solvability of the latter equation and is characterized by several equivalent assertions. We show that most of these characterizations carry over to the two-player case.

**2017-06-09:**

**Author:** José Fuentes, DCC, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, September 06, 14:30.

**Title:** Fast and Compact Planar Embeddings

**Abstract:**

There are many representations of planar graphs, but few are as elegant as Turán’s (1984): it is simple and practical, uses only 4 bits per edge, can handle self-loops and multi-edges, and can store any specified embedding. Its main disadvantage has been that it does not allow efficient searching (Jacobson, 1989). In this work we show how to add a sublinear number of bits to Turán’s representation so that it supports fast navigation while retaining simplicity. As a consequence of the inherited simplicity, we offer the first efficient parallel construction of a compact encoding of a planar graph embedding. Our experimental results show that the resulting representation uses about 6 bits per edge in practice, supports basic navigation operations within a few microseconds, and can be built sequentially at a rate below 1 microsecond per edge, featuring a linear speedup with a parallel efficiency around 50% for large datasets.

**2017-30-08:**

**Author:** Miquel Oliu Barton, Université Paris-Dauphine.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, August 30, 14:30.

**Title:** The unknown stochastic game

**Abstract:**

Stochastic games are the simplest model of dynamic games, introduced by Shapley in 1953: to every state corresponds a game, and the transition from state to state is jointly controlled by the players. But what do the players know about the parameters of the model?

**2017-23-08:**

**Author:** Jannik Matuschke, TU Munich.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, August 23, 14:30.

**Title:** The Complexity of Computing a Robust Flow

**Abstract:**

Robust network flows are a concept for dealing with uncertainty and unforeseen failures in the network infrastructure. One of the most basic models is the Maximum Robust Flow Problem: Given a network and an integer k, the task is to find a path flow of maximum robust value, i.e., the guaranteed value of surviving flow after removal of any k arcs in the network. The complexity of this problem appeared to have been settled almost a decade ago: Aneja et al. (2001) showed that the problem can be solved efficiently when k = 1, while an article by Du and Chandrasekaran (2007) established that the problem is NP-hard for any constant value of k larger than 1.

We point to a flaw in the proof of the latter result, leaving the complexity for constant k open once again. For the case that k is not bounded by a constant, we present a new hardness proof, establishing that the problem is strongly NP-hard, even when only two different capacity values occur and the number of paths is polynomial in the size of the input. We further show that computing optimal integral solutions is already NP-hard for k = 2 (whereas for k=1, an efficient algorithm is known) and give a positive result for the case that capacities are in {1, 2}.

This is joint work with Yann Disser (TU Darmstadt).

**2017-16-08:**

**Author:** Sebastian Berndt, U Lübeck.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, August 16, 14:30.

**Title:** The PACE challenge: practical algorithms for tree width

**Abstract:**

While the theoretical aspects concerning the computation of tree width – one of the most important graph parameters – are well understood, it is not clear how it can be computed practically. As tree width has a wide range of applications, e.g. in bioinformatics or artificial intelligence, this lack of understanding hinders the applicability of many important algorithms in the real world. The Parameterized Algorithms and Computational Experiments (PACE) challenge therefore chose the computation of tree width as one of its challenge problems in 2016 and again in 2017. In 2016, Hisao Tamaki (Meiji University) presented a new algorithm that outperformed the other approaches (SAT solvers and branch-and-bound) by far. Although no theoretical performance guarantees for his algorithm are known, it was so dominant that all participants in 2017 used a variant of Tamaki’s algorithm. This allowed the winning implementation, due to Larisch (King Abdullah University of Science and Technology) and Salfelder (University of Leeds), to solve over 50% of the test suite (containing graphs with over 3500 nodes) in under six seconds. Before PACE 2016, no algorithm was known to reliably compute tree width on graphs with about 100 nodes. Acknowledging this fact, Tamaki’s paper on his algorithm won the best paper award in track B of ESA 2017.

In this talk, I will first give a gentle introduction to tree width, parameterized algorithms and the PACE challenge. I will then present a streamlined version of Tamaki’s algorithm based on a variant of the well-studied cops-and-robber game. This allows us to produce hard instances for the algorithm, derive the first known upper bound on its running time and give a plausible conjecture explaining its practical performance.

This talk is based on joint work with Max Bannach (University of Luebeck).

**2017-09-08:**

**Author:** Marc Schroder, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, August 09, 14:30.

**Title:** Network congestion games are robust for variable demand

**Abstract:**

Network congestion games have provided a fertile ground for the algorithmic game theory community. Indeed, many of the pioneering works on bounding the efficiency of equilibria use this framework as their starting point. In recent years, there has been an increased interest in studying randomness in this context, with a particular focus on understanding what happens when link latencies are subject to random shocks. We assume that the source of randomness is the inherent variability of the demand that most practical networks suffer from. Therefore, in this paper we look at the basic nonatomic network congestion game with the additional feature that demand is random. The first obstacle we have to sort out is the definition of equilibrium, as the classic concept of Wardrop equilibrium needs to be extended to the random demand setting. Interestingly, Wang, Doan, and Chen (2014), by considering an equilibrium notion in which flow particles evaluate their expected cost using the full knowledge of the demand distribution, conclude that the price of anarchy of the game can be arbitrarily large. In contrast, our main result in this paper is that under a very natural equilibrium notion, in which the basic behavioral assumption is that users evaluate their expected cost according to the demand they experience in the system, the price of anarchy of the game is actually the same as that in the deterministic demand game.

**2017-02-08:**

**Author: ** Patricio Foncea, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, August 02, 14:30.

**Title:** Prophet Inequalities and applications in Mechanism Design

**Abstract:**

The classic prophet inequality states that, when faced with a finite sequence of non-negative independent random variables, a gambler who knows their distributions and is allowed to stop the sequence at any time can obtain, in expectation, at least half as much reward as a prophet who knows the values of each random variable and can choose the largest one. In this work we study a modified version of the previous problem, where the random variables are presented in uniform random order and the stopping rule must be decided beforehand (i.e., the gambler cannot adapt its strategy). We prove that a factor of 1-1/e is achievable with respect to the expected maximum, and that this value is tight.

As an application of these results, we provide a reduction to posted price mechanisms, where an auction takes place with customers arriving sequentially and being offered a take-it-or-leave-it price for the item. We prove that if such a mechanism is not allowed to adapt the prices offered and the customers arrive in random order, then its expected revenue guarantees the same tight bound with respect to the expected revenue of the optimal auction.
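A fixed-threshold stopping rule of the kind studied here can be simulated directly. The Monte Carlo sketch below (an illustration, not the paper's analysis; the threshold 0.5 is an arbitrary choice, not the optimal one) compares such a gambler against the prophet's expected maximum on i.i.d. uniform values:

```python
import random

def prophet_vs_threshold(n=5, trials=20000, seed=0):
    """Estimate the ratio between a fixed-threshold gambler's expected
    reward and the prophet's expected maximum, for n i.i.d. U[0,1] values.
    The threshold tau = 0.5 is illustrative, fixed before the sequence."""
    rng = random.Random(seed)
    tau = 0.5
    gambler = prophet = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        prophet += max(xs)
        # stop at the first value meeting the threshold; if none does,
        # the gambler is forced to accept the last value
        gambler += next((x for x in xs if x >= tau), xs[-1])
    return gambler / prophet
```

Since the rule never adapts to observed values, it fits the non-adaptive setting of the talk; the estimated ratio is well above the 1-1/e guarantee for this easy distribution.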

**2017-12-07:**

**Author: ** Tjark Vredeveld, U Maastricht.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, July 12, 14:30.

**Title:** Using stochastic dominance in the comparison of online algorithms.

**Abstract:**

The standard way to assess the quality of online algorithms is competitive analysis. However, as it is a worst-case performance measure, it has been criticised as being too pessimistic or unable to discriminate between different algorithms. A good example of this is the paging problem, in which all so-called marking algorithms are k-competitive (where k is the number of pages in the fast memory).

Several variants have been proposed to deal with this criticism. In this talk, I will discuss the method of stochastic dominance to compare online algorithms. For example, for the paging problem, we have shown that, under some assumptions on locality of reference, the online algorithm known as Least Recently Used outperforms any other online algorithm.
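The comparison notion itself is easy to state computationally. The sketch below (my own illustration of the standard definition, not code from the talk) checks empirical first-order stochastic dominance between two samples of algorithm costs: A dominates B if, at every cost threshold, A is at least as likely as B to stay below it.

```python
def stochastically_dominates(costs_a, costs_b):
    """Return True if the empirical cost distribution of algorithm A
    first-order stochastically dominates that of B (lower cost is better):
    for every threshold c, P(cost_A <= c) >= P(cost_B <= c)."""
    thresholds = sorted(set(costs_a) | set(costs_b))
    n_a, n_b = len(costs_a), len(costs_b)
    return all(
        sum(x <= c for x in costs_a) / n_a >= sum(x <= c for x in costs_b) / n_b
        for c in thresholds
    )
```

Unlike a worst-case ratio, this relation compares the whole cost distribution, which is what allows it to separate algorithms that competitive analysis treats as equal.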

This talk is based on joint work with Benjamin Hiller (Zuse Institute Berlin).

**2017-21-06:**

**Author: ** Fabio Botler, DII, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, June 21, 14:30.

**Title:** Two conjectures on decompositions of graphs into paths and cycles.

**Abstract:**

A path (resp. cycle) decomposition of a graph G is a set of edge-disjoint paths (resp. cycles) of G that covers the edge-set of G. Gallai (1966) conjectured that every graph on n vertices admits a path decomposition of size at most (n + 1)/2, and Hajós (1968) conjectured that every Eulerian graph on n vertices admits a cycle decomposition of size at most (n − 1)/2. Although these conjectures seem similar, they have been verified for distinct classes of graphs. For example, Gallai’s Conjecture was verified for graphs with no cycle composed only of even-degree vertices, while Hajós’ Conjecture was verified for graphs with maximum degree at most 4, and for planar graphs. In this talk, I will present a technique that allowed us to verify both conjectures for some (new and old) classes of graphs. We verify both Gallai’s and Hajós’ Conjectures for series–parallel graphs; for graphs with maximum degree 4; and for graphs with treewidth at most 3. Moreover, we show that the only graphs in these classes that do not admit a path decomposition of size at most n/2 are isomorphic to K_3, K_5 or K_5 – e.

**2017-14-06:**

**Author: ** Sergio Rica, UAI Physics Center, UAI.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, June 14, 14:30.

**Title:** Thermodynamics of small systems through a reversible and conservative discrete automaton.

**Abstract:**

The focus of this talk will be the Q2R model which is a reversible and conservative cellular automaton. The Q2R model possesses quite a rich and complex dynamics. Indeed, the configuration space is composed of a huge number of cycles with exponentially long periods, that we attempt to characterize. Furthermore, a coarse-graining approach is applied to the time series of the total magnetization, leading to a master equation that governs the macroscopic irreversible dynamics of the Q2R automata. The methodology is replicated for various system sizes. In the case of small systems, we show that the master equation leads to a tractable probability transfer matrix of moderate size, which provides a consistent and tractable thermodynamic description. This work is in collaboration with Felipe Urbina and Marco Montalva.

**2017-07-06:**

**Author: ** Pedro Montealegre, UAI.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, June 7, 14:30.

**Title:** Graph Reconstruction in the Congested Clique

**Abstract:**

The congested clique model is a message-passing model of distributed computation where the underlying communication network is the complete graph of $n$ nodes. In this talk we consider the situation where the joint input to the nodes is an $n$-node labeled graph $G$, i.e., the local input received by each node is a line of the adjacency matrix of $G$. Nodes execute an algorithm, communicating with each other in synchronous rounds and their goal is to compute some function that depends on $G$.

The most difficult problem we could attempt to solve is the reconstruction problem, where nodes are asked to recover all the edges of the input graph $G$. Formally, given a class of graphs $\mathcal{G}$, the problem is defined as follows: if $G$ is not in $\mathcal{G}$, then every node must reject; on the other hand, if $G$ belongs to $\mathcal{G}$, then every node must end up knowing all the edges of $G$. It is not difficult to see that the total number of bits received by a node through one link is at least $\Omega(\log|\mathcal{G}_n|/n)$, where $\mathcal{G}_n$ is the subclass of all $n$-node labeled graphs in $\mathcal G$.

In this talk we prove that the previous bound is tight and that it is possible to achieve it with only two rounds. Moreover, this bound can be achieved for hereditary classes of graphs in one round. This result recovers all known algorithms concerning the reconstruction of graph classes in one round with bandwidth $O(\log n)$: forests, planar graphs, cographs, etc. We also obtain new one-round algorithms for other hereditary graph classes such as unit disc graphs, interval graphs, etc.

**2017-31-05:**

**Author: ** Diego Arroyuelo, Informatics Department, UTFSM.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, May 31, 14:30.

**Title:** Data-aware measures for the dictionary problem.

**Abstract:** Given a sorted set S of size n from a universe U = {1, 2, …, u}, the dictionary problem consists of constructing a data structure for S that supports the following queries efficiently: rank (given a universe element x, count the number of elements in S that are less than or equal to x), select (given a value j, obtain the j-th element in the sorted set S), and member (test whether a given element belongs to S).
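As a baseline for these operations, before any compression is considered, a plain sorted array already supports all three queries via binary search. The sketch below (an uncompressed illustration of the interface, not a data-aware structure from the talk) uses Python's `bisect` module:

```python
import bisect

class SortedSetDict:
    """Plain sorted-array dictionary: an uncompressed baseline supporting
    the rank, select, and member queries from the abstract."""

    def __init__(self, elements):
        self.s = sorted(set(elements))

    def rank(self, x):
        # number of elements in S that are less than or equal to x
        return bisect.bisect_right(self.s, x)

    def select(self, j):
        # j-th smallest element of S (1-indexed)
        return self.s[j - 1]

    def member(self, x):
        i = bisect.bisect_left(self.s, x)
        return i < len(self.s) and self.s[i] == x
```

This representation costs n words regardless of the distribution of S; the data-aware measures surveyed in the talk aim to beat it by exploiting that distribution.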

Dictionaries are a fundamental building block for many applications in computer science (compressed data structures, information retrieval, text compression), biology (compression of biological sequences), and physical sciences (data collection from particle colliders), among others. Many of these applications are data-intensive, hence the compressed representation of dictionaries becomes crucial.

There exist several compression models for dictionaries, each model defining a compression measure. Such a measure is simply a formula that indicates the minimum amount of bits one could achieve for the representation of set S using that model. In particular, data-aware measures are those that take advantage of the distribution of the n elements in the set to improve compression.

This talk will survey the best known data-aware measures for dictionary compression, as well as the corresponding data structures to support operations rank, select, and member. This will be followed by a discussion about the need of new data-aware measures that achieve better compression in certain circumstances that are of practical relevance. The talk will end proposing a preliminary definition of such a measure, and then a discussion on the consequences of this definition.

**2017-24-05:**

**Author: ** Retsef Levi, MIT Sloan School of Management.

**Where:** Beauchef Poniente (new building), Torre Oriente, 4th Floor, Sala Seminario DII.

**When:** Wednesday, May 24, 14:30.

**Title:** Exploration vs. Exploitation: Reducing Uncertainty in Operational Problems

**Abstract:** Motivated by several core operational applications, we introduce a new class of multistage stochastic optimization models that capture a fundamental tradeoff between performing work and making decisions under uncertainty (exploitation) and investing capacity (and time) to reduce the uncertainty in the decision making (exploration). Unlike existing models, in which the exploration-exploitation tradeoffs typically relate to learning the underlying distributions, the models we introduce assume a known probabilistic characterization of the uncertainty, and focus on the tradeoff of learning (or partially learning) the exact realizations.

For several interesting scheduling models we derive insightful structural results on the optimal policies that lead not only to quantification of the value of learning, but also obtain surprising optimal local decision rules for when it is optimal to explore (learn).

The talk is based on several papers that are joint work with Chen Attias, Tom Magnanti, Robi Krauthgamer and Yaron Shaposhnik.

**2017-17-05:**

**Author: ** Carlos Ochoa, DCC, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, May 17, 14:30.

**Title:** Synergistic Solutions on MultiSets

**Abstract:** Karp et al. (1988) described Deferred Data Structures for Multisets as “lazy” data structures which partially sort data to support online rank and select queries, with the minimum amount of work in the worst case over instances of size $n$ and number of queries $q$ fixed. Barbay et al. (2016) refined this approach to take advantage of the gaps between the positions hit by the queries (i.e., the structure in the queries). We develop new techniques in order to further refine this approach and take advantage, all at once, of the structure (i.e., the multiplicities of the elements), of some notions of local order (i.e., the number and sizes of runs) and global order (i.e., the number and positions of existing pivots) in the input, and of the structure and order in the sequence of queries. Our main result is a synergistic deferred data structure which outperforms all solutions in the comparison model that take advantage of only a subset of these features. As intermediate results, we describe two new synergistic sorting algorithms, which take advantage of some notions of structure and order (local and global) in the input, improving upon previous results which take advantage only of the structure (Munro and Spira 1979) or of the local order (Takaoka 1997) in the input; and one new multiselection algorithm which takes advantage of not only the order and structure in the input, but also of the structure in the queries.

**2017-03-05:**

**Author: ** Patricio Poblete, DCC, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, May 03, 14:30.

**Title:** Robin Hood Hashing really has constant average search cost and variance in full tables

**Abstract:** Thirty years ago, the Robin Hood collision resolution strategy was introduced for open addressing hash tables, and a recurrence equation was found for the distribution of its search cost. Although this recurrence could not be solved analytically, it allowed for numerical computations that, remarkably, suggested that the variance of the search cost approached a value of $1.883$ when the table was full. Furthermore, by using a non-standard mean-centered search algorithm, this would imply that searches could be performed in expected constant time even in a full table.

In spite of the time elapsed since these observations were made, no progress has been made in proving them. In this paper we introduce a technique to work around the intractability of the recurrence equation by solving instead an associated differential equation. While this does not provide an exact solution, it is sufficiently powerful to prove a bound for the variance, and thus obtain a proof that the variance of Robin Hood is bounded by a small constant for load factors arbitrarily close to 1. As a corollary, this proves that the mean-centered search algorithm runs in expected constant time.

We also use this technique to study the performance of Robin Hood hash tables under a long sequence of insertions and deletions, where deletions are implemented by marking elements as deleted. We prove that, in this case, the variance is bounded by $1/(1-\alpha)+O(1)$, where $\alpha$ is the load factor.

To model the behavior of these hash tables, we use a unified approach that we apply also to study the First-Come-First-Served and Last-Come-First-Served collision resolution disciplines, both with and without deletions.
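The Robin Hood collision rule analysed above is simple to state in code: on a collision, the element that has probed farther from its home slot keeps the position, so "rich" elements (close to home) yield to "poor" ones. The sketch below is a minimal linear-probing illustration of the rule, not the paper's analysed variant (it omits deletions and resizing):

```python
class RobinHoodTable:
    """Minimal open-addressing table with linear probing and the Robin
    Hood rule: on a collision, the entry with the larger probe distance
    keeps the slot and the displaced entry continues probing."""

    def __init__(self, size):
        self.size = size
        self.slots = [None] * size  # entries are (key, probe_distance)

    def insert(self, key):
        d = 0
        pos = hash(key) % self.size
        while True:
            entry = self.slots[pos]
            if entry is None:
                self.slots[pos] = (key, d)
                return
            if entry[1] < d:
                # resident is "richer" (closer to home): swap, then keep
                # probing to re-place the displaced entry
                self.slots[pos], (key, d) = (key, d), entry
            pos = (pos + 1) % self.size
            d += 1

    def search(self, key):
        d = 0
        pos = hash(key) % self.size
        # Robin Hood invariant: once a resident's probe distance drops
        # below ours, the key cannot be further along the run
        while self.slots[pos] is not None and self.slots[pos][1] >= d:
            if self.slots[pos][0] == key:
                return True
            pos = (pos + 1) % self.size
            d += 1
        return False
```

Equalizing probe distances this way is what concentrates the search-cost distribution and keeps its variance bounded, which is the phenomenon the paper quantifies.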

**2017-26-04:**

**Author: ** Pablo Marquet, PUC.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, April 26, 14:30.

**Title:** What will be the impact of climate change on biodiversity (us included): An open and hot problem.

**2017-12-04:**

**Author: ** Marc Schroder, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, April 12, 14:30.

**Title:** Claim games for estate division problems

**Abstract:** The estate division problem, also known as the bankruptcy problem, concerns the issue of dividing an estate among a group of claimants when the sum of entitlements exceeds the size of the estate. This problem was formally introduced by O’Neill (1982), after which most of the literature focused on comparing different solution rules by means of their properties. We approach the problem strategically and analyse the claim game proposed by O’Neill (1982).

**2017-05-04:**

**Author: ** Jie Han, Universidade de Sao Paulo.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, April 05, 14:30.

**Title:** Perfect Matchings in Hypergraphs

**Abstract:** The problem of determining the minimum d-degree threshold for finding a perfect matching in k-uniform hypergraphs has attracted much attention in the last decade. It is also closely related to the Erdős Matching Conjecture, dating back to the 1960s. We will introduce the problem, survey the existing results, and mention some recent progress.

**2017-29-03:**

**Author: ** Kevin Schewior, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, March 29, 14:30.

**Title:** Chasing Convex Bodies

**Abstract:** We consider the following online problem in d-dimensional Euclidean space. The server, initially located at the origin, receives an online sequence of convex bodies. In response to each body, the task is to move to a point within that body so as to minimize the total distance moved. We evaluate the performance of online algorithms by competitive analysis. This problem was first considered by Friedman and Linial in 1993 and is an interesting special case of the very general online problem of metrical task systems. Friedman and Linial gave a rather involved constant-competitive algorithm for d=2.

We first look at different greedy policies and notice that they are not constant-competitive even when d=2 and all convex bodies are lines. We then develop a nevertheless simple new algorithm for this special case. Applying a sequence of known and new reductions, we are able to extend this result to a 2^O(d)-competitive algorithm when d is arbitrary and all convex bodies are affine subspaces of the full space. This is the first constant-competitive algorithm in this setting with fixed d>2.
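The greedy policy discussed for the line case moves the server to the nearest point of each arriving line, i.e., its orthogonal projection. The helper below is a minimal sketch of that single move (the talk's point is precisely that repeating it is not constant-competitive):

```python
def project_onto_line(x, p, v):
    """Orthogonal projection of point x onto the line {p + t*v},
    where the line is given by a point p and a direction v.
    Points and directions are tuples of coordinates."""
    t = sum((xi - pi) * vi for xi, pi, vi in zip(x, p, v)) \
        / sum(vi * vi for vi in v)
    return tuple(pi + t * vi for pi, vi in zip(p, v))
```

Each projection minimizes the cost of the current step in isolation; the adversary defeats this by alternating nearly parallel lines so that small greedy moves accumulate unboundedly.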

Finally, we discuss directions for future research: Closing the gap between 2^O(d) and the simple lower bound sqrt(d), considering general convex-body chasing, and considering convex-function chasing which is an intermediate problem between convex-body chasing and metrical task systems.

**2017-22-03:**

**Author: ** Krzysztof Fleszar, U Chile.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, March 22, 14:30.

**Title:** Maximum Disjoint Paths: New Algorithms based on Tree-Likeness

**Abstract:** Maximum Edge Disjoint Paths is a classical NP-hard problem of finding a maximum-size subset from a given set of k terminal pairs that can be routed via edge-disjoint paths.

One of the big open problems in approximation algorithms is to close the gap between the best known approximation upper bound of $\sqrt{n}$ (Chekuri et al. (2006)) and the best known lower bound of $2^{\sqrt{\log n}}$ (Chuzhoy et al. (2016)). In their seminal paper, Raghavan and Thompson (Combinatorica, 1987) introduce the technique of randomized rounding for LPs; their technique gives an O(1)-approximation when edges may be used by $O(\log n / \log\log n)$ paths.

In this talk, I introduce the problem and present two of our algorithms (ESA 2016) that strengthen the fundamental results above. They provide new bounds formulated in terms of the feedback vertex set number r of a graph, which measures its vertex deletion distance to a forest.

- An $O(\sqrt{r} \log(kr))$-approximation algorithm. Up to a logarithmic factor, it strengthens the best known ratio $\sqrt{n}$ due to Chekuri et al., as $r \le n$.

- An $O(1)$-approximation algorithm with congestion bounded by $O(\log(kr) / \log\log(kr))$, strengthening the bound obtained by the classic approach of Raghavan and Thompson.

At the end, an open problem will be stated.

**2017-15-03:**

**Author: ** Saeed Hadikanloo, Univ. de Paris 9.

**Where:** Republica 779B, Sala P3, 3rd floor.

**When:** Wednesday, March 15, 14:30.

**Title:** Learning in NonAtomic Anonymous Games: Application to First Order Mean Field Games

**Abstract:**

We introduce a model of anonymous games where the actions are chosen from possibly player-dependent sets. We propose several learning procedures similar to the well-known Fictitious Play and Online Mirror Descent and prove their convergence to equilibrium under the classical monotonicity condition. Typical examples are First Order Mean Field Games.

**Previous Years**