Seminars 2014


Author:  Cesar Hidalgo, MIT Media Lab.

Where: Sala B204, 16.00-17.00.

Title: Macro Connections.
Abstract:  The rise of computational methods has generated a new natural resource: data. While it is not clear whether Big Data will open up trillion-dollar markets, what is clear is that visualizations are essential for making meaning out of data. The capacity to create data visualizations, however, is not widespread. To help develop this capacity I have been working on the creation of Data Visualization Engines, which are tools that allow people to quickly visualize any portion of a large dataset and construct visual narratives from which they can draw insight. In this talk I will present five big data visualization engines we created at the MIT Media Lab’s Macro Connections group and will show how to use them to improve our understanding of the development of economies, cultures and cities. The data visualization engines I will demo include (i) the Observatory of Economic Complexity, which is the most comprehensive tool for exploring international trade data to date; (ii) DataViva, which is a tool we created to open up data for the entire formal-sector economy of Brazil, including data on all of the workforce, municipalities, industries, and occupations of Brazil; (iii) Pantheon, a dataset and visualization engine we created to explore global patterns of cultural production; (iv) Immersion, a tool that inverts the email interface by focusing it on people rather than messages; and (v) Place Pulse and StreetScore, which are crowd-sourcing and machine learning tools we have developed to help understand the aesthetic aspects of cities and their evolution.


Author:  Sebastiano Vigna, University of Milano.

Where: Sala 33, DII, 14.30-15.30.

Title: Axioms for centrality.
Abstract:  Given a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology and computer science, and a whole plethora of centrality measures (a.k.a. centrality indices, or rankings) has been proposed to account for the importance of the nodes of a network. In this paper, we try to provide a mathematically sound survey of the most important classic centrality measures known from the literature and propose an axiomatic approach to establish whether they are actually doing what they have been designed for. Our axioms suggest some simple, basic properties that a centrality measure should exhibit.

Surprisingly, only a new simple measure based on distances, harmonic centrality, turns out to satisfy all axioms; essentially, harmonic centrality is a correction to Bavelas’s classic closeness centrality designed to take unreachable nodes into account in a natural way.

As a sanity check, we examine in turn each measure under the lens of information retrieval, leveraging state-of-the-art knowledge in the discipline to measure the effectiveness of the various indices in locating web pages that are relevant to a query. While there are some examples of such comparisons in the literature, here for the first time we also take into consideration centrality measures based on distances, such as closeness, in an information-retrieval setting. The results closely match the data we gathered using our axiomatic approach.

Our results suggest that centrality measures based on distances, which in recent years have been neglected in information retrieval in favor of spectral centrality measures, do provide high-quality signals; moreover, harmonic centrality pops up as an excellent general-purpose centrality index for arbitrary directed graphs.
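As a toy illustration (not from the paper): harmonic centrality of a node v sums 1/d(u, v) over all other nodes u, so unreachable pairs simply contribute 0 — exactly the correction over Bavelas’s closeness that the abstract describes. A minimal sketch in Python, on a hypothetical adjacency-dict graph:

```python
def distances_from(g, s):
    """BFS distances from s in a directed graph given as an adjacency
    dict {node: list of successors}."""
    dist, frontier = {s: 0}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            for v in g.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def harmonic_centrality(g, nodes):
    """H(v) = sum over u != v of 1 / d(u, v); unreachable pairs
    contribute 0, the natural handling of unreachable nodes."""
    dist = {u: distances_from(g, u) for u in nodes}
    return {v: sum(1.0 / dist[u][v]
                   for u in nodes if u != v and v in dist[u])
            for v in nodes}
```

On the directed path a → b → c, the scores are 0, 1 and 1.5 respectively: c is most central because everyone can reach it, even though the graph is far from strongly connected.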


Author:  Julián Mestre, University of Sydney.

Where: Sala 33, DII, 14.30-15.30.

Title: How unsplittable-flow-covering helps scheduling with job-dependent cost functions.
Abstract:  Generalizing many well-known and natural scheduling problems, scheduling with job-specific cost functions has gained a lot of attention recently. In this setting, each job incurs a cost depending on its completion time, given by a private cost function, and one seeks to schedule the jobs to minimize the total sum of these costs. The framework captures many important scheduling objectives such as weighted flow time or weighted tardiness. Still, the general case as well as the mentioned special cases are far from being well understood, even on a single machine.

Aiming for a better general understanding of this problem, in this work we focus on the case of uniform job release dates on one machine for which the state of the art is a 4-approximation algorithm. This is true even for a special case that is equivalent to the covering version of the well-studied and prominent unsplittable flow on a path problem, which is interesting in its own right. For that covering problem, we present a quasi-polynomial time (1 + \epsilon)-approximation algorithm that yields an (e + \epsilon)-approximation for the above scheduling problem. Moreover, for the latter we devise the best possible resource augmentation result regarding speed: a polynomial time algorithm which computes a solution with optimal cost at speedup 1 + \epsilon. Finally, we present an elegant QPTAS for the special case where the cost functions of the jobs fall into at most \log n many classes. This algorithm even allows the jobs to have up to \log n many distinct release dates.

(This is joint work with Wiebke Höhn and Andreas Wiese.)


Author:  Jean-Bernard Baillon, Paris 1.

Where: Sala 33, DII, 14.30-15.30.

Title: About numerical sequences.
Abstract:  What has optimization to say about the convergence of real valued sequences? A hot topic!


Author:  Javier Martínez de Albéniz, U. Barcelona.

Where: Sala 33, DII, 14.30-15.30.

Title: On the nucleolus of the assignment game.
Abstract:  A matrix A defines an assignment market, where each row represents a buyer and each column a seller. If buyer i is matched with seller j, the market produces a_ij units of utility.
In this framework of bilateral assignment games, we analyze the nucleolus of an assignment game and give the conditions to ensure that a given vector is the nucleolus of some assignment game.
We show that assignment matrices that give rise to the same nucleolus form a join-semilattice with one maximal element, which is always a valuation assignment matrix, i.e., any 2 x 2 submatrix has two optimal matchings, and we describe some of its properties.
From a practical point of view, we give a rule to compute the nucleolus of a valuation assignment game.
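To make the underlying object concrete: the worth of a coalition in an assignment market is the value of an optimal matching within it. A brute-force sketch on a hypothetical toy matrix (this computes only the market’s optimal value, not the nucleolus, which is the subject of the talk):

```python
from itertools import combinations, permutations

def optimal_matching_value(A):
    """Worth of the grand coalition in the assignment market given by
    the nonnegative matrix A: the maximum total utility over matchings
    of buyers (rows) to sellers (columns).  Brute force over all
    matchings of size min(m, n), so tiny markets only."""
    m, n = len(A), len(A[0])
    k = min(m, n)
    best = 0.0
    for rows in combinations(range(m), k):
        for cols in permutations(range(n), k):
            best = max(best, sum(A[i][j] for i, j in zip(rows, cols)))
    return best
```

For A = [[5, 2], [3, 4]] the two matchings are worth 5 + 4 = 9 and 2 + 3 = 5, so the grand coalition is worth 9; the nucleolus then splits this worth among the four agents.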


Author:  Jeremy Barbay, DCC, U. Chile.

Where: Sala 33, DII, 14.30-15.30.

Title: Deferred Data Structures: lazy and (online 1-)competitive.
Abstract:  Deferred Data Structures are designed for cases where data arrives or is
modified too fast to be indexed in real time, and is queried sporadically
(both in time and in space). Deferred Data Structures supporting queries
for basic algorithmic problems such as =Sorting= (Select queries), =Maxima=
(Dominating queries) and =Convex Hull= (Convex Edge queries) will be among
the building blocks for queries on complex data, such as that produced in
Astronomy, in the study of Social Networks or in Wearable computing.

In this talk I will present some simple Deferred Data Structures to
maintain a partially sorted array while answering =rank=, =search= and
=select= queries and supporting =insert= and =delete= dynamic operations,
and the competitive analysis of its complexity in an online setting.

I will describe a subset of the latest results on this topic:
In internal memory, a data structure which uses space within $O(N)$ for
$N$ elements and supports =select= queries in $1$-competitive time with
respect to offline optimal, =rank= and =search= in similar time, and
=insert= and =delete= in an optimal number of comparisons, and similar
amortized time. In external memory, a data structure which supports
=select= queries in $O(1)$-competitive time with respect to offline
optimal, =rank= and =search= in similar time, and =insert= and =delete= in
an optimal number of comparisons. In practice, some experiments on
synthetic data show that those data structures perform fewer comparisons
than the best-known offline solutions.
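The flavor of deferred (lazy) selection can be sketched in a few lines: each query partially quicksorts only as much as needed and records the pivot positions (“fences”), so later queries reuse earlier work. This is an illustrative sketch of the idea, not the structure from the talk:

```python
import random

class DeferredSelect:
    """Lazy selection on an array: select(k) quicksorts only as much
    as needed and records pivot positions ('fences'), so later queries
    reuse earlier partitioning work."""

    def __init__(self, data):
        self.a = list(data)
        self.fences = {-1, len(self.a)}  # sentinel boundaries

    def select(self, k):
        """Return the k-th smallest element (0-based)."""
        a, fences = self.a, self.fences
        while k not in fences:
            lo = max(f for f in fences if f < k)
            hi = min(f for f in fences if f > k)
            l, r = lo + 1, hi - 1
            p = random.randint(l, r)
            a[p], a[r] = a[r], a[p]          # move pivot to the end
            pivot, store = a[r], l
            for i in range(l, r):            # Lomuto partition
                if a[i] < pivot:
                    a[i], a[store] = a[store], a[i]
                    store += 1
            a[store], a[r] = a[r], a[store]
            fences.add(store)                # a[store] is in final position
        return a[k]
```

After a few queries the array is sorted only around the queried ranks; a sequence of selects touching all ranks ends up fully sorting it, which is the “deferred sorting” behavior the competitive analysis measures.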


Author:  Marco Scarsini, LUISS, Rome.

Where: Sala 33, DII, 14.30-15.30.

Title: Hotelling games in networks.
Abstract:  We consider a Hotelling game where a finite number of retailers
choose a location, given that their potential customers are
distributed on a network. Retailers do not compete on price but only
on location, therefore each consumer shops at the closest store. We
show that when the number of retailers is large enough, the game
admits a pure Nash equilibrium and we construct it. We then compare
the equilibrium cost borne by the consumers with the cost that could be
achieved if the retailers followed the dictates of a benevolent
planner. We perform this comparison in terms of the induced price of
anarchy, i.e., the ratio of the worst equilibrium cost and the optimal
cost, and the induced price of stability, i.e., the ratio of the best
equilibrium cost and the optimal cost. We show that, asymptotically in
the number of retailers, these ratios are two and one, respectively.
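A minimal discrete sketch of the two quantities being compared (consumers sitting on the nodes of a cycle — a hypothetical setup chosen for simplicity, not taken from the paper):

```python
def cycle_dist(i, j, n):
    """Graph distance between nodes i and j on an n-cycle."""
    d = abs(i - j) % n
    return min(d, n - d)

def shares(positions, n):
    """Market share of each retailer: each of the n consumers (one per
    node) shops at the closest store, ties split evenly."""
    s = [0.0] * len(positions)
    for c in range(n):
        dmin = min(cycle_dist(c, p, n) for p in positions)
        closest = [k for k, p in enumerate(positions)
                   if cycle_dist(c, p, n) == dmin]
        for k in closest:
            s[k] += 1.0 / len(closest)
    return s

def is_nash(positions, n):
    """No retailer can strictly increase its share by relocating."""
    for k in range(len(positions)):
        cur = shares(positions, n)[k]
        for move in range(n):
            alt = positions[:k] + [move] + positions[k + 1:]
            if shares(alt, n)[k] > cur + 1e-9:
                return False
    return True

def consumer_cost(positions, n):
    """Total distance travelled by consumers to their nearest store."""
    return sum(min(cycle_dist(c, p, n) for p in positions)
               for c in range(n))
```

With two retailers on a 12-cycle, every placement is an equilibrium (shares are always 6–6 by symmetry), yet consumer cost ranges from 18 (antipodal stores) to 30 (adjacent stores); this gap between the best and worst equilibria is what the induced prices of stability and anarchy measure.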


Author:  Miquel Oliu-Barton, Paris Dauphine.

Where: Sala 33, DII, 14.30-15.30.

Title: Zero-sum dynamic games: two open problems
Abstract:  1. Stochastic games model dynamic interactions in which the environment changes in response to the behavior of the players. These games were introduced by Shapley (1953), who proved the existence of the discounted value and of optimal stationary strategies in two-player zero-sum games with finite state and action spaces. In this case, a stochastic game is a finite set of matrices (states) together with a transition probability for each entry of each matrix. That is, the states evolve during the game following a Markov chain controlled by both players. The convergence of the values, as the discount factor tends to 0, was obtained by Bewley-Kohlberg (1976) using the Tarski-Seidenberg elimination theorem from real algebraic geometry, and I recently provided a new, more elementary proof (2013). However, very little is known about the limit values, which remain an important open problem.
2. Infinitely repeated games with incomplete information model the strategic use of information in multi-stage games. They were introduced by Aumann and Maschler (1968), who proved the existence of the value in the case of incomplete information on one side and its lack of existence in the general case. Mertens and Zamir (1977) proved that the limit of the values of the finitely repeated games, as the number of stages tends to infinity, exists, and provided a characterization. Asymptotically optimal strategies were finally obtained by Heuer (1992). However, very little is known about how the players should disclose their private information in these games, which remains a challenging open problem as well.


Author:  Nicolas Schabanel, CNRS & Paris 7.

Where: Sala 33, DII, 14.30-15.30.

Title: Stochastic Cellular Automata: Correlations, Decidability and Simulations.
Abstract:  This paper introduces a simple formalism for dealing with deterministic, non-
deterministic and stochastic cellular automata in a unified and composable manner. This formalism
allows for local probabilistic correlations, a feature which is not present in usual definitions. We
show that this feature allows for strictly more behaviors (for instance, number conserving stochastic
cellular automata require these local probabilistic correlations). We also show that several problems
which are deceptively simple in the usual definitions, become undecidable when we allow for local
probabilistic correlations, even in dimension one. Armed with this formalism, we extend the notion
of intrinsic simulation between deterministic cellular automata to the non-deterministic and stochastic settings. Although the intrinsic simulation relation is shown to become undecidable in dimension two and higher, we provide explicit tools to prove or disprove the existence of such a simulation between any two given stochastic cellular automata. Those tools rely upon a characterization of equality of stochastic global maps, shown to be equivalent to the existence of a stochastic coupling between the random sources. We apply them to prove that there is no universal stochastic cellular automaton. Yet we provide stochastic cellular automata achieving optimal partial universality, as well as a universal non-deterministic cellular automaton.


Author:  Luciano Grippo, UNGS, BA.

Where: Sala 33, DII, 14.30-15.30.

Title:  Partitions of graphs into convex sets.
Abstract:  TBA


Author:  Ruy Fabila-Monroy, CINVESTAV, IPN.

Where: Sala 33, DII, 14.30-15.30.

Title: (n,m)-covers of spheres.

Abstract: In this talk we will discuss the combinatorial problem of how many open sets are needed to cover the d-dimensional sphere in such a way that: no set contains a pair of antipodal points; every point of the sphere is covered n times; and every point of the northern hemisphere is covered m times.

Joint work with Imre Bárány and Birgit Vogtenhuber.


Author:  Maya Stein, DIM and CMM.

Where: Sala 33, DII, 14.30-15.30.

Title: Monochromatic path/cycle partitions

Abstract: A conjecture of Gyárfás says that given any colouring with r colours of the edges of the complete graph K_n on n vertices, there are r disjoint monochromatic paths that induce a partition of V(K_n). The conjecture is true for r\leq 3. Replacing paths with cycles, it is known that in general, the number of cycles needed is greater than r, but can be bounded by a function of r. (Here, single vertices/edges count as cycles.) For r=2, it is known that 2 paths/cycles suffice.

This talk gives an overview on the history and then describes some recent results for bipartite and multipartite graphs, with fixed values of r. We also study variants of the problem for r-local colourings, and for r-mean colourings. These are joint results with Conlon, with Lang, with Lang and Schaudt, and with Schaudt.


Author:  Hugo Meric, INRIA/NIC-Labs.

Where: Sala 33, DII, 14.30-15.30.

Title: On the computation of optimal rate for broadcasting systems.

Abstract: Modern satellite communication systems mainly rely on time sharing to optimize the throughput. Each receiver uses the channel during a given fraction of time. During this period, the transmission parameters (i.e., the modulation and the coding rate) are chosen in order to transmit as much information as possible. The scheme is easy to implement, which explains its popularity. However, it is today well established that time sharing is not optimal in terms of throughput offered to the receivers. Indeed, the scheme that consists of sending superposed data (on the same frequency) offers better performance than time sharing. Hierarchical modulation is a practical implementation of superposition coding. In a broadcast system, it enables the source to communicate with several receivers on the same frequency and at the same time. In my work, I am interested in offering the best throughput to the receivers when the system implements hierarchical and non-hierarchical modulations. We will see that this problem is equivalent to computing the intersection of the convex hull of a set of points in high dimension with a line. I will introduce a first algorithm that solves (in theory) the problem using quadratic optimization. Then I will discuss another possible solution and the implementation issues.


Author:  Luca Castelli, École Polytechnique.

Where: Sala seminarios, DIM, 16.15-17.15.

Title: Schnyder woods for higher genus surfaces: from graph encoding
to graph drawing

Abstract:  Planar graphs play a fundamental role in computer science, discrete
mathematics and related fields. Among their properties, a deep and nice
characterization is the one provided in terms of edge orientations, now
referred to as Schnyder woods. W. Schnyder introduced this
combinatorial structure to establish a new planarity criterion: he proved
that a graph is planar if and only if its incidence poset (incidence
relations between vertices and edges) has dimension at most 3. Schnyder
woods have led to a surprisingly large number of applications (in graph
drawing, bijective enumeration, random sampling, graph encoding, contact
representations, greedy routing). Unfortunately, these structures are
originally only defined for planar graphs. This talk focuses on the
recent generalizations of Schnyder woods to the case of higher genus
surfaces, and its algorithmic applications in the domains of graph
encoding and graph drawing.


Author:  Jorge Vera, PUC.

Where: Sala 33, DII, 14.30-15.30.

Title: Some “new” algorithmic ideas for stochastic optimization problems.
Abstract: In this talk I will present some questions we have been studying concerning algorithms for stochastic optimization problems. These questions originated in the modeling of a decision-making problem with interaction between medium-term and short-term decisions, which we modeled as a two-stage stochastic optimization problem. Although solution methods for this kind of problem have been widely studied, some recent developments in first-order methods (new versions of “stochastic gradient”, for example) have led us to try some of these ideas on the problem above. First-order methods tend to be slow, and for that reason some of the interesting developments of recent years have been various “acceleration” schemes. I will discuss some existing results and the questions we have about a possible new acceleration scheme for algorithms of this kind.

This is joint work with Alfonso Lobos and Robert Freund.


Author:  José Soto, U Chile.

Where: Sala 33, DII, 14.30-15.30.

Title: Getting a better than 1/2-approximation for MAXCUT without semidefinite programming.
Abstract:  TBA

2014-07-17 (Thursday!):

Author:  William J. Cook, U Waterloo.

Where: Sala 33, DII, 14.30-15.30.

Title: Parallel Implementation of the Cutting-Plane Method.
Abstract:  The cutting-plane method is one of the most successful techniques for the practical solution of problems in integer programming and combinatorial optimization. The method is inherently sequential, moving through a sequence of linear-programming solutions.  We will discuss ideas for parallelizing the method, using the traveling salesman problem as an example.


Author:  Bart de Keijzer, Sapienza U.

Where: Sala 33, DII, 14.30-15.30.

Title: Mechanism design for a class of positive congestion games with a simple form of externalities.
Abstract:  This talk is about my favorite open problem of my PhD thesis. In some work that I did together with Guido Schaefer, we consider a class of games that we call “congestion games with 2-externalities”. In these games there is a set of facilities and a set of players. Each player has to choose a facility, and the utility of a player goes up as more players choose the same facility (contrary to the regular definition of congestion games). In particular, the utility of a player i that chooses facility e is the sum of a nonnegative value v_ie and values v_ije, for all players j that choose the same facility. The values v_ije are referred to as 2-externalities. We consider the optimization problem of finding the assignment of players to facilities such that the total utility is maximized. This is NP-hard, but there exists a polytime 2-approximation algorithm for the general case.

The open problem is to obtain (or prove the non-existence of) a polytime constant-factor approximation algorithm in a mechanism design setting, i.e., the setting where the input externalities are reported by the players themselves, so they may lie about their externalities if it would give them a better utility. This additional difficulty means that we want our algorithm to be “truthful”: the algorithm must guarantee that misreporting gets a player a lower utility (in expectation) than reporting truthfully. This problem has been considered in independent work of Abraham, Babaioff, Dughmi, and Roughgarden, where the roles of players and facilities are switched around, which results in a class of combinatorial auctions. This is essentially the same problem, and I am going to talk about their progress on this problem, as well as our own progress.


Author:  Pablo Pérez Lantero, U. de Valparaíso.

Where: Sala 33, DII, 14.30-15.30.

Title: The area of the convex hull of stochastic points.
Abstract:  The computation of the convex hull of a finite point set in the plane is a fundamental problem in computational geometry. Recently, computational geometry problems on uncertain geometric data have attracted interest, arising from scenarios in which the gathered data has many false positives, data precision errors, etc. Let P be a finite point set in the plane, where each point p of P is assigned a rational probability prob(p). We study computation problems on the random variable Area(S), which denotes the area of the convex hull of a random sample S of P in which each point p of P is included in S independently with probability prob(p). We show that:
(1) The expectation of Area(S) can be computed in polynomial time.
(2) Given a value w > 0, computing the probability Pr[Area(S) >= w] is NP-hard.
(3) Given a value w > 0 and a fixed \epsilon < 1, in polynomial time a probability A can be computed so that Pr[Area(S) >= w] <= A <= Pr[Area(S) >= (1 - \epsilon)w].

Joint work with Sergio Cabello (FMF, University of Ljubljana).
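For intuition about the random variable Area(S), its expectation can be computed by brute force over all 2^n samples on a tiny hypothetical instance (exponential time, purely illustrative — the talk’s result (1) is that polynomial time suffices):

```python
def hull_area(points):
    """Convex hull area via Andrew's monotone chain + shoelace formula."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(ps):
        h = []
        for p in ps:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(reversed(pts))[:-1]
    if len(hull) < 3:
        return 0.0
    s = sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
            - hull[(i + 1) % len(hull)][0] * hull[i][1]
            for i in range(len(hull)))
    return abs(s) / 2.0

def expected_hull_area(points, prob):
    """E[Area(S)] by summing over all 2^n subsets, weighting each by
    its sampling probability.  Exponential in n; illustration only."""
    n = len(points)
    total = 0.0
    for mask in range(1 << n):
        p = 1.0
        sample = []
        for i in range(n):
            if mask >> i & 1:
                p *= prob[i]
                sample.append(points[i])
            else:
                p *= 1 - prob[i]
        if p:
            total += p * hull_area(sample)
    return total
```

On the unit square with one corner present only with probability 1/2, the expectation is 0.5 · 1.0 + 0.5 · 0.5 = 0.75, averaging the full square and the remaining triangle.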



Author:  Mona Rahn, CWI.

Where: Auditorium de República 779, primer piso, 12:00-13:00.

Title: Bounding the inefficiency of altruism through social contribution games.
Abstract:  We introduce a new class of games, called social contribution games (SCGs), where each player’s individual cost is equal to the cost he induces on society because of his presence. Our results reveal that SCGs constitute useful abstractions of altruistic games when it comes to the analysis of the robust price of anarchy. We first show that SCGs are altruism-independently smooth, i.e., the robust price of anarchy of these games remains the same under arbitrary altruistic extensions. We then devise a general reduction technique that enables us to reduce the problem of establishing smoothness for an altruistic extension of a base game to a corresponding SCG. Our reduction applies whenever the base game relates to a canonical SCG by satisfying a simple social contribution boundedness property. As it turns out, several well-known games satisfy this property and are thus amenable to our reduction technique. Examples include min-sum scheduling games, congestion games, second price auctions and valid utility games. Using our technique, we derive mostly tight bounds on the robust price of anarchy of their altruistic extensions. For the majority of the mentioned game classes, the results extend to the more differentiated friendship setting.



Author: Richard Lang, U. de Chile.

Where: Sala 33, DII, 14.30-15.30

Title: Monochromatic vertex partitions
Abstract:  Given a graph G whose edges are coloured arbitrarily with r colours, how
many monochromatic paths are needed to partition the vertices of G? In
particular, for which classes of graphs can we give an answer that does
not depend on the order of G? The same questions have been asked for
monochromatic cycles.

Most of the work on this topic focuses on complete graphs, but there are
also results on complete bipartite graphs and graphs with fixed
independence number. There are some natural lower bounds for the number
of monochromatic paths/cycles needed and in many cases these are
conjectured to be sharp. However to this day these conjectures have only
been settled for small r. The most impressive results in this field have
been obtained by using an asymptotic approach involving the regularity
lemma, which has become a standard technique of proof.

In this talk I will give an overview of the conjectures and recent
results on monochromatic path and cycle partitions of complete and
bipartite graphs. Further, I will illustrate the application of the
regularity lemma by outlining a proof of a new result (in joint work
with Maya Stein and Oliver Schaudt) concerning monochromatic cycle
partitions of complete bipartite graphs.



Author: Rodrigo Carrasco, U. Adolfo Ibáñez.

Where: Sala 33, DII, 14.30-15.30

Title: Single Machine Scheduling with Job-Dependent Convex Cost and Arbitrary Precedence Constraints
Abstract:  In this work we combine resource augmentation and alpha-point scheduling techniques to compute approximate solutions for a general family of scheduling problems: each job has a convex non-decreasing cost function applied to its completion time and the goal is to compute a schedule that minimizes the total cost subject to precedence constraints. We show that our algorithm is a O(1)-speed O(1)-approximation algorithm and our numerical experiments show that the speed-scaling ratio needed is actually close to 1.



Author: Neil Olver, VU Amsterdam & CWI.

Where: Sala 33, DII, 14.30-15.30

Title: Steiner tree relaxations
Abstract:  The Steiner tree problem – find the cheapest way of connecting some subset of the vertices of a given graph – is a fundamental one in network design. A recent breakthrough of Byrka et al. gives a 1.39-approximation, and is based on a particular, very large linear program relaxation, the “hypergraphic” relaxation. On the other hand, the much simpler and more tractable bidirected cut relaxation remains very poorly understood. I will discuss these LP relaxations, and some recent work (joint with Andreas Feldmann, Jochen Könemann and Laura Sanità) which shows equivalence between the bidirected and hypergraphic relaxations for a certain class of instances. The hope is that this may be a step towards a nontrivial approximation algorithm based on the bidirected relaxation.


Author: Alex Shapiro, Georgia Tech, USA

Where: Auditorium de República 779, primer piso, 12:00-13:30.

Title: Risk neutral and risk averse approaches to multistage stochastic programming.
Abstract: In many practical situations one has to make decisions sequentially based on data available at the time of the decision and facing uncertainty about the future. This leads to optimization problems which can be formulated in the framework of multistage stochastic programming. In this talk we consider risk neutral and risk averse approaches to multistage stochastic programming. We discuss conceptual and computational issues involved in formulating and solving such problems. As an example we give numerical results based on the Stochastic Dual Dynamic Programming method applied to planning of the Brazilian interconnected power system.



Author: Andrew Goodall, Computer Sciences Institute at Charles University, Prague.

Where: Sala 33, DII, 14.30-15.30

Title: Graph polynomials from simple graph sequences
Abstract: The chromatic polynomial evaluated at a positive integer n is equal to the number of homomorphisms to the complete graph on n vertices. Many important graph polynomials, such as the Tutte polynomial and the independence polynomial, are likewise determined by counting homomorphisms to a sequence of (multi)graphs. We give a powerful construction that produces graph polynomials in this way from sequences of simple graphs. It uses just two fundamental operations (disjoint union and interpretations of relational structures) and starts from a simple set of building blocks (transitive tournaments with added unary relations). This will be illustrated by numerous examples, some of whose combinatorial properties have been well explored (such as that of the chromatic polynomial), but most of which are new.
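The opening fact — evaluating the chromatic polynomial at n counts homomorphisms into K_n, i.e. proper n-colourings — is easy to check by brute force on small graphs:

```python
from itertools import product

def count_homs(edges, n):
    """Number of homomorphisms from the graph with the given edge list
    (vertices 0..k-1) to the complete graph K_n, i.e. the number of
    proper n-colourings: adjacent vertices must get distinct colours."""
    k = 1 + max(max(e) for e in edges)
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(n), repeat=k))
```

For the path on 3 vertices the chromatic polynomial is n(n-1)^2, and for the triangle it is n(n-1)(n-2); the brute-force homomorphism counts agree at every n.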



Author: Carla Rafols, Ruhr Universität Bochum

Where: Sala de seminarios del CMM, 16:15-17:15

Title: Hard-core predicates and list decoding
Abstract:  A hard-core predicate P of a one-way function f is a predicate of the input of f which is not leaked by f. There are many classical results regarding hard-core predicates of the most common cryptographically useful one-way functions such as RSA, discrete exponentiation or the Rabin function. A classical result by Goldreich and Levin states that every one-way function has a hard-core predicate.

At FOCS 2003, Akavia, Goldwasser and Safra gave a coding theoretic interpretation of Goldreich and Levin’s result and an elegant list decoding methodology to prove that a predicate is a hard-core of a one-way function. Additionally, they showed how to use the methodology to get a simpler proof of many classical works on hard-core predicates, although their result did not apply to important results like the one that says that the bits in the middle of RSA, Rabin or the discrete exponentiation problem are hard-core (Håstad and Näslund, 1996).

In this talk I will explain this connection between list decoding and hard-core predicates of Akavia et al. I will also explain how, in joint work with Paz Morillo, we extended their results to the middle bits of these functions. The talk is aimed at a general public interested in theoretical computer science (not necessarily cryptographically savvy).



Author: Guido Schaefer, CWI.

Where: Sala 33, DII, 14.30-15.30

Title:  Optimal tolls with arc restrictions and heterogeneous players

Abstract: The problem of computing optimal network tolls that induce a Nash equilibrium of minimum total cost has been studied intensively in the literature, but mostly under the assumption that these tolls are unrestricted. Here we consider this problem under the more realistic assumption that the tolls have to respect some given upper bound restrictions on the arcs. Our main focus is on the heterogeneous player case, i.e., players may have different sensitivities to tolls. This case gives rise to new challenges in devising algorithms to compute optimal restricted tolls. We will review some recent results for the restricted network toll problem and discuss some open problems.



Author: José Correa, Universidad de Chile.

Where: Sala 33, DII, 14.30-15.30

Title:  Optimizing the present value of a project

Abstract: A project consists of a set of activities, each with a given duration and value, together with a precedence graph on the activities. In this talk we will discuss the problem of maximizing the present value of the project. We will see how to write it as a linear optimization problem, discuss its complexity, and try to determine whether a strongly polynomial algorithm exists.
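A toy sketch of the objective (hypothetical data and an exponential enumeration over integer start times — the talk is about LP formulations and complexity, not this brute force). Each activity’s value is discounted by exp(-rate · completion time), and precedences must be respected:

```python
from itertools import product
from math import exp

def best_npv(durations, values, prec, horizon, rate):
    """Maximum net present value over integer start times in
    0..horizon: each activity's (possibly negative) value is discounted
    by exp(-rate * completion_time), and activity j may start only once
    activity i has completed, for every (i, j) in prec."""
    n = len(durations)
    best = float('-inf')
    for starts in product(range(horizon + 1), repeat=n):
        comp = [s + d for s, d in zip(starts, durations)]
        if all(starts[j] >= comp[i] for i, j in prec):
            best = max(best,
                       sum(v * exp(-rate * c) for v, c in zip(values, comp)))
    return best
```

With a positive-value activity preceding a negative-value one, the optimum schedules the first immediately and delays the second as long as allowed — the tension between earning early and paying late is what makes the problem interesting.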



Author: Martin Loebl, Charles University, Czech Republic.

Where: Sala Seminarios CMM, 16:15-17:15

Title: Is there a natural graph function that distinguishes non-isomorphic graphs?

Abstract: We discuss this question and suggest a possible strategy for ‘YES’. This is based partly on joint work with Martin Klazar and Iain Moffatt, and partly on joint work with Jean-Sebastien Sereni.



Author: Gonzalo Navarro, Universidad de Chile.

Where: Sala 33, DII, 14.30-15.30

Title: Optimal Encodings for Range Majority Queries

Abstract: We face the problem of designing a data structure that can report all \tau-majorities within any range of an array A[1,n], without storing A. A \tau-majority in a range A[i,j], for 0<\tau< 1, is an element that occurs more than \tau(j-i+1) times in A[i,j]. We show that \Omega(n\log(1/\tau)) bits are necessary for such a data structure. Then, we design a structure using O(n\log(1/\tau)) bits that answers queries in O((1/\tau)\log\log_w(1/\tau)\log n) time, on a RAM machine with word size w.
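The query semantics can be stated as a naive reference implementation (a linear scan over the range — the point of the talk’s structure is to answer such queries without storing A, in O(n log(1/τ)) bits):

```python
from collections import Counter

def tau_majorities(A, i, j, tau):
    """All tau-majorities in A[i..j] (0-based, inclusive): the elements
    occurring more than tau * (j - i + 1) times in that range."""
    length = j - i + 1
    return sorted(x for x, c in Counter(A[i:j + 1]).items()
                  if c > tau * length)
```

Note the strict inequality: in a range of length 7 with τ = 0.5, an element needs at least 4 occurrences to qualify, so at most ⌊1/τ⌋ elements can ever be reported.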



Author: Kevin Perrot, Universidad de Chile.

Where: Sala 33, DII, 14.30-15.30

Title: (Kadanoff) Sandpiles

Abstract: Sandpile models are a subclass of Cellular Automata. Bak et al. introduced them in 1987 for they exemplify the nicely named notion of “Self-Organized Criticality”. Those non-linear discrete dynamical systems illustrate the evolution of cubic sand grains from nicely packed columns to nicely packed columns. A simple local rule is applied until reaching a stable configuration… It all looks very easy so far, but we will see that wonderful (and mysterious!) objects naturally appear. What can we understand about them? I will present some studies we (with Eric Rémila and Enrico Formenti) have initiated via the Kadanoff Sandpile model.
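A minimal sketch of the simplest case (p = 1 of the Kadanoff family, under the common convention that a grain falls from column i to column i+1 whenever the height difference is at least 2; the general Kadanoff rule, where p grains fall at once onto the p columns to the right, is not modeled here):

```python
def stabilize(h):
    """Iterate the basic sandpile rule: while some column exceeds its
    right neighbour by at least 2, one grain falls from it to that
    neighbour.  Returns the stable configuration, trailing zeros
    trimmed; grains may spill arbitrarily far to the right."""
    h = list(h) + [0]                  # room to spill to the right
    changed = True
    while changed:
        changed = False
        for i in range(len(h) - 1):
            if h[i] - h[i + 1] >= 2:
                h[i] -= 1
                h[i + 1] += 1
                changed = True
        if h[-1] > 0:
            h.append(0)
    while h and h[-1] == 0:
        h.pop()
    return h
```

Starting from a single column, the stable configuration is a non-increasing “staircase” whose consecutive heights differ by at most 1 — the nicely packed shape the abstract jokes about.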



Author: Luis Briceño, Universidad Técnica Federico Santa María.

Where: Sala 33, DII, 14.30-15.30

Title: Splitting algorithms for computing Nash equilibria in zero-sum games

Abstract: In this talk we present a new method for computing a Nash equilibrium in zero-sum games in Hilbert spaces. In the finite-dimensional case (finite number of actions), linear programming methods are used to compute such an equilibrium, but their performance degrades when the number of actions is large. Other methods in the literature require computing the projection onto the simplex at each iteration, which is not an easy task in either finite or infinite dimensions (compact set of actions).

We present a method that avoids computing the projection onto the simplex by splitting it into projections onto two suitable closed vector subspaces, which are very simple to compute in both finite and infinite dimensions. The method requires no sub-iterations and converges weakly to a Nash equilibrium of the game. This splitting of the simplex raises questions about possible improvements in computing the projection onto a simplex.
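For context on the step the method avoids: in finite dimensions the Euclidean projection onto the probability simplex can be computed by the standard sort-based algorithm, sketched below (the difficulty the abstract refers to is more pronounced in the infinite-dimensional setting):

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {x >= 0, sum(x) = 1}."""
    u = sorted(v, reverse=True)
    css = 0.0
    theta = 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj - t > 0:       # j-th largest coordinate stays positive after shifting
            theta = t
    return [max(x - theta, 0.0) for x in v]

print(project_simplex([2.0, 0.0]))   # [1.0, 0.0]
print(project_simplex([0.2, 0.2]))   # splits the deficit evenly across coordinates
```

This runs in O(n log n) per projection; methods that need it at every iteration thus pay this cost repeatedly, which the splitting approach of the talk sidesteps.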



Author: Jannik Matuschke, Universidad de Chile and Technical University of Berlin, Germany.

Where: Sala 33, DII, 14.30-15.30

Title: Optimizing fare inspection strategies in public transportation networks

Abstract: We study several variants of a bilevel optimization problem that models the effect of fare inspections in public transportation networks. On the first level, the leader determines on each edge the probability of controlling passing passengers subject to a global budget constraint. On the second level, the passengers optimize their routes according to the given controlling probabilities on the edges.
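As a toy reading of the leader's instrument (hypothetical numbers; "inspected at least once along the route" is one natural way to aggregate independent per-edge controls, not necessarily the paper's exact objective):

```python
from math import prod

def inspection_probability(path_edges, p):
    """P(inspected at least once) = 1 - prod over the path's edges of (1 - p_e)."""
    return 1.0 - prod(1.0 - p[e] for e in path_edges)

# Hypothetical control probabilities chosen by the leader on two edges.
p = {("u", "v"): 0.1, ("v", "w"): 0.2}
print(round(inspection_probability([("u", "v"), ("v", "w")], p), 3))   # 0.28
```

Under such a reading, passengers weigh this probability against route length, which is what drives the adaptive versus non-adaptive distinction below.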

Regarding the passengers’ behavior, we study an adaptive variant, in which passengers opt for the shortest path to their destination after encountering an inspector, and a non-adaptive variant, in which they continue along the previously chosen path. We derive several exact and approximative algorithms for these two problems as well as a tight bound of 3/4 on the ratio of the optimal cost between adaptive and non-adaptive strategies.

For the leader’s optimization problem, we devise an LP-based approximation algorithm and a local search procedure that shifts inspection probabilities within an initially determined support set. We demonstrate the applicability of our approach by conducting a computational study on instances of the Dutch railway network, revealing that the computed solutions are within 95% of the corresponding upper bounds.

This is joint work with José R. Correa, Tobias Harks, and Vincent J.C. Kreuzen.



Author: Oliver Schaudt, University of Cologne, Germany.

Where: Sala 33, DII, 14.30-15.30

Title: Frankl’s conjecture

Abstract: Frankl’s conjecture (1979) says that in a finite family of sets, closed under union, there is an element contained in at least half of the sets. Although simple-sounding, the conjecture is considered one of the tough problems in combinatorics. In this talk I will sketch the history of the conjecture and the most successful approaches so far. Joint work with Henning Bruhn, Pierre Charbit, and Jan-Arne Telle.
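The statement is easy to check by brute force on small families containing a nonempty set (this verifies instances; it proves nothing):

```python
from itertools import chain

def union_closed(family):
    """True iff the union of any two members is again a member."""
    return all(a | b in family for a in family for b in family)

def frequent_elements(family):
    """Elements contained in at least half of the sets of the family."""
    counts = {}
    for x in chain.from_iterable(family):
        counts[x] = counts.get(x, 0) + 1
    return {x for x, c in counts.items() if 2 * c >= len(family)}

family = {frozenset({1}), frozenset({2}), frozenset({1, 2})}
assert union_closed(family)
print(frequent_elements(family))   # {1, 2}: each element lies in 2 of the 3 sets
```

Frankl's conjecture asserts that `frequent_elements` is nonempty for every union-closed family containing a nonempty set; no counterexample is known.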



Author: Nicole Megow, Max-Planck-Institut für Informatik and TU Berlin, Germany.

Where: Sala 33, DII, 14.30-15.30

Title: Scheduling resources of varying speed

Abstract: Scheduling problems in which the processing speed of resources may vary over time arise in various computation and production environments. We distinguish two main types of varying-speed scenarios: one in which the speed function is static and cannot be influenced, and a dynamic setting in which deciding on the processor speed is part of the scheduling problem. In both cases, we aim for cost-efficient scheduling algorithms.

In this talk, we present algorithms and performance analyses for both models. Assuming a known speed function we show a nearly optimal polynomial-time algorithm (a PTAS) for scheduling on a single machine. This result relies on dual schedules, a re-interpretation of the problem within the well-known two-dimensional Gantt chart. Assuming unreliable resources with unknown speed functions, we construct a universal solution that obtains for any speed function a schedule of cost within a factor 4 of the optimal solution. It is somewhat surprising that such a universal solution always exists. Finally, we give structural and algorithmic insights for the dynamic problem variant, where we are interested in the tradeoff between scheduling cost and speed-changing cost.



Author: Nicolas Schabanel (CNRS, U. Paris Diderot).

Where: Sala 33, DII, 14.30-15.30

Title: Facility Location in Evolving Metrics

Abstract: Understanding the dynamics of evolving social or infrastructure networks is a challenge in applied areas such as epidemiology, viral marketing, and urban planning. During the past decade, data has been collected on such networks but has yet to be fully analyzed. We propose to use information on the dynamics of the data to find stable partitions of the network into groups. For that purpose, we introduce a time-dependent, dynamic version of the facility location problem that includes a switching cost when a client’s assignment changes from one facility to another. This may provide a better representation of an evolving network, emphasizing abrupt changes of relationships between subjects rather than the continuous evolution of the underlying network. We show that in realistic examples this model indeed yields better-fitting solutions than optimizing every snapshot independently. We present an O(\log nT)-approximation algorithm and a matching hardness result, where n is the number of clients and T the number of time steps. We also give another O(\log nT)-approximation algorithm for the variant where one pays at each time step (leasing) for each open facility.

Joint work with David Eisenstat (U. Brown) and Claire Mathieu (CNRS, ENS Paris)
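To make the dynamic objective concrete, here is a minimal sketch with hypothetical costs and notation of my own (opening plus connection plus switching):

```python
def dynamic_cost(assignment, open_cost, dist, switch_cost):
    """assignment[t][c]: facility serving client c at time step t.
    Total cost = opening cost of every facility ever used
               + connection cost at every time step
               + switch_cost whenever a client changes facility."""
    used = {f for step in assignment for f in step.values()}
    cost = sum(open_cost[f] for f in used)
    for t, step in enumerate(assignment):
        cost += sum(dist[t][c, f] for c, f in step.items())
        if t > 0:
            cost += switch_cost * sum(1 for c in step if step[c] != assignment[t - 1][c])
    return cost

open_cost = {"f1": 3, "f2": 3}
dist = [{("c1", "f1"): 1, ("c1", "f2"): 5},   # connection costs at time 0
        {("c1", "f1"): 5, ("c1", "f2"): 1}]   # connection costs at time 1
stay = [{"c1": "f1"}, {"c1": "f1"}]
move = [{"c1": "f1"}, {"c1": "f2"}]
print(dynamic_cost(stay, open_cost, dist, 2),   # 9  = 3 + 1 + 5 (no switch)
      dynamic_cost(move, open_cost, dist, 2))   # 10 = 6 + 1 + 1 + 2 (opens both, switches)
```

Here the switching cost makes the stable assignment cheaper even though the second snapshot favours f2. In the leasing variant mentioned above, the one-time opening sum would instead be paid at every time step for each facility open at that step.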



Author: Anupam Gupta, Carnegie Mellon University (USA).

Where: Sala 33, DII, 14.30-16.00

Title: Sparsest Cut in Bounded Treewidth Graphs

Abstract: We consider approximation algorithms for the sparsest cut graph partitioning problem. Here, given a graph G with demand pairs \{s_i,t_i\}, the goal is to separate G into two parts so as to minimize the ratio of the number of edges cut to the number of demand pairs separated.

In this talk, I will sketch
- the best hardness-of-approximation (based on P!=NP) currently known for this problem,
- and a 2-approximation algorithm with running time n^{O(k)}, where k is the treewidth of the underlying graph G. Our algorithm rounds a Sherali-Adams LP relaxation.

This positive result can be complemented by (a) an integrality gap of a factor of 2 for the Sherali-Adams hierarchy even after polynomially many rounds, and (b) a unique-games hardness of a factor of 2. Time permitting, I will give a high-level intuition of how the NP-hardness can be extended to prove these matching results.

I will try to keep the talk as self-contained as possible.

This is joint work with Kunal Talwar (Microsoft Research SVC) and David Witmer (CMU), and appeared at the STOC 2013 conference.
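The sparsest cut objective above can be evaluated exactly by exponential brute force on tiny instances (a sketch for checking small examples, unrelated to the Sherali-Adams rounding of the talk):

```python
from itertools import combinations

def sparsest_cut(nodes, edges, demands):
    """min over nonempty S, a proper subset of V, of |edges cut| / |demand pairs separated|."""
    nodes = list(nodes)
    best = float("inf")
    for r in range(1, len(nodes)):
        for S in combinations(nodes, r):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            sep = sum(1 for s, t in demands if (s in S) != (t in S))
            if sep > 0:
                best = min(best, cut / sep)
    return best

# 4-cycle with demands across the two diagonals.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
demands = [(1, 3), (2, 4)]
print(sparsest_cut([1, 2, 3, 4], edges, demands))   # 1.0: cutting S={1,2} severs 2 edges, 2 pairs
```

The algorithm of the talk achieves a 2-approximation of this value in time n^{O(k)} on graphs of treewidth k, instead of enumerating all 2^n subsets.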