05 Mar 2021 04:00 PM Athens via Zoom
Game theory, min-max optimization and modern machine learning
Modern machine learning (ML) has seen a recent surge of “adversarial” models and formulations. The term implies the presence of multiple competing objectives in a problem, and such formulations are often discussed in game-theoretic terms. Some of these approaches, like generative adversarial networks (GANs), adversarial examples, domain adaptation, and multi-agent reinforcement learning, have led to significant breakthroughs in their respective areas. Yet the ML community has largely relied on numerical methods designed for single-objective optimization, which introduces additional challenges and failure modes: compared to single-objective optimization, game dynamics are more complex and less well understood.
In this talk, I will begin with a round of very recent, exciting applications of game-theoretic formulations in ML, touching on ideas of fairness, performativity and causality. I will then provide an overview of work done with my group and collaborators on numerical methods and fundamental limits for some tractable classes of smooth game-theoretic formulations relevant to ML. We will discuss the effect of momentum and alternating updates, and debunk a persistent myth about why the original GAN formulation doesn’t work (it does, when we get the dynamics right). Finally, I will outline some ongoing projects and summarize interesting questions worth exploring.
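The point about alternating updates can be illustrated on the standard toy bilinear game min_x max_y f(x, y) = xy (a common textbook example, not code from the talk; the function names and step size below are my own choices). Simultaneous gradient descent-ascent spirals away from the equilibrium at the origin, while alternating updates, where the second player reacts to the first player’s fresh iterate, keep the iterates bounded:

```python
# Toy bilinear min-max game: min_x max_y f(x, y) = x * y.
# The unique equilibrium is at the origin (0, 0).

def simultaneous_gda(x, y, lr=0.1, steps=100):
    # Both players update from the same iterate.
    # Each step multiplies x**2 + y**2 by (1 + lr**2), so the
    # iterates spiral outward, away from the equilibrium.
    for _ in range(steps):
        gx, gy = y, x                     # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy
    return x, y

def alternating_gda(x, y, lr=0.1, steps=100):
    # The max player sees the min player's updated iterate.
    # On this game the resulting linear map has determinant 1 and
    # eigenvalues on the unit circle (for small lr), so the orbit
    # stays on a bounded ellipse instead of diverging.
    for _ in range(steps):
        x = x - lr * y
        y = y + lr * x
    return x, y

xs, ys = simultaneous_gda(1.0, 1.0)
xa, ya = alternating_gda(1.0, 1.0)
print(xs**2 + ys**2)  # grows well beyond the initial value of 2
print(xa**2 + ya**2)  # remains close to the initial value of 2
```

This is only a sketch of the dynamics phenomenon; neither scheme converges on this game, but it shows why single-objective intuition (just follow the gradient) can mislead in min-max problems.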
About the Speaker
Ioannis Mitliagkas is an assistant professor in the department of Computer Science and Operations Research (DIRO) at the University of Montréal, a member of Mila, and a Canada CIFAR AI Chair holder. Before that, he was a postdoctoral scholar in the departments of Statistics and Computer Science at Stanford University. He obtained his Ph.D. from the department of Electrical and Computer Engineering at The University of Texas at Austin and his engineering diploma from the Technical University of Crete, Greece. His research includes topics in optimization, dynamics and learning, with a focus on modern machine learning. His work currently focuses on the intersection of ML, game theory and applications in causal inference, domain adaptation and generative models. He has also worked on MCMC methods, efficient large-scale and distributed algorithms, and topics in generalization and domain adaptation. In the past he worked on high-dimensional streaming problems and fast algorithms for large graph problems.
You can download the slides of the presentation.