Benjamin Guedj (Inria) : Rethinking Generalisation: Beyond KL with Geometry and Comparators

Séminaire « Probabilités et Statistique »
Amphi Turing

Generalisation is arguably one of the central problems in machine learning and foundational AI. Generalisation theory has traditionally relied on KL-based PAC-Bayesian bounds, which, despite their elegance, often obscure geometry and limit applicability. In this talk, I will present recent advances that move beyond traditional bounds. One line of work replaces KL with Wasserstein distances, yielding high-probability bounds valid for heavy-tailed losses and leading to new, optimisable learning objectives. Another line introduces a general comparator framework, showing how optimal bounds naturally arise from convex conjugates of cumulant generating functions, unifying and extending many classical results. Together, these perspectives highlight how rethinking divergences and comparators opens new directions in both theory and practice. I will conclude by discussing links with information theory and how these ideas might shape the next generation of PAC-Bayesian learning algorithms.
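For orientation, the classical KL-based bound the talk starts from, and the convex-conjugate construction behind the comparator framework, can be sketched as follows (a standard PAC-Bayes-kl statement in my own notation, not taken from the referenced papers):

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for all posteriors \rho over hypotheses, the classical
% PAC-Bayes-kl bound reads
\mathrm{kl}\!\left(\hat{R}_n(\rho) \,\middle\|\, R(\rho)\right)
  \;\le\; \frac{\mathrm{KL}(\rho \,\|\, \pi) \;+\; \ln\frac{2\sqrt{n}}{\delta}}{n},
% where \hat{R}_n(\rho) and R(\rho) denote empirical and population risk,
% \pi is a data-free prior, and \mathrm{kl} is the binary relative entropy.
% The comparator framework generalises the left-hand side: the binary kl is
% one instance of a comparator \psi^*, the convex (Legendre--Fenchel)
% conjugate of a cumulant generating function \psi of the loss,
\psi^*(x) \;=\; \sup_{\lambda \in \mathbb{R}} \bigl(\lambda x - \psi(\lambda)\bigr).
```

Choosing other cumulant generating functions (e.g. for heavy-tailed losses, where the moment generating function need not exist for all λ) then yields the families of bounds discussed in the talk.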

References:
arxiv.org/abs/2310.10534
arxiv.org/abs/2306.04375
arxiv.org/abs/2309.04381