Applied Mathematics Seminar (archives)

Lionel Riou-Durand
Speaker's institution
University of Warwick
Date and time of the talk
Location of the talk
Seminar room
Abstract

Sampling approximations for high-dimensional statistical models often rely on so-called gradient-based MCMC algorithms. It is now well established that these samplers scale better with the dimension than other state-of-the-art MCMC samplers, but are also more sensitive to tuning [5]. Among these, Hamiltonian Monte Carlo (HMC) is a widely used sampling method shown to achieve the gold-standard d^{1/4} scaling with respect to the dimension [1]. However, its efficiency is also known to be quite sensitive to the choice of integration time, see e.g. [4], [2]. This problem is related to periodicity in the autocorrelations induced by the deterministic trajectories of Hamiltonian dynamics. To tackle this issue, we develop a robust alternative to HMC built upon Langevin diffusions (namely Metropolis Adjusted Langevin Trajectories, or MALT), inducing randomness in the trajectories through a continuous refreshment of the velocities. We study the optimal scaling problem for MALT and recover the d^{1/4} scaling of HMC proven in [1] without additional assumptions. Furthermore, we highlight the fact that the autocorrelations of MALT can be controlled by a uniform and monotone bound thanks to the randomness induced in the trajectories, so that the method is robust to tuning. Finally, we compare our approach to Randomized HMC ([2], [3]) and establish quantitative contraction rates for the 2-Wasserstein distance that support the choice of Langevin dynamics.

This is joint work with Jure Vogrinc (University of Warwick).
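As a rough illustration of the idea behind the abstract (and not the authors' MALT algorithm, which additionally applies a Metropolis accept/reject correction at the end of each trajectory), a minimal Python sketch of an underdamped Langevin trajectory with continuous velocity refreshment, on a standard Gaussian target, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # Gradient of the potential U(x) = x**2 / 2 (standard Gaussian target).
    return x

def langevin_trajectory(x, v, n_steps, h, gamma):
    """One trajectory of underdamped Langevin dynamics, discretized with
    an OBABO splitting: Ornstein-Uhlenbeck velocity refreshments (O)
    wrapped around a leapfrog step (BAB)."""
    c = np.exp(-gamma * h / 2)        # damping over half a step
    s = np.sqrt(1.0 - c**2)           # matching noise scale
    for _ in range(n_steps):
        v = c * v + s * rng.standard_normal()  # O: partial velocity refresh
        v -= 0.5 * h * grad_U(x)               # B: half kick
        x += h * v                             # A: drift
        v -= 0.5 * h * grad_U(x)               # B: half kick
        v = c * v + s * rng.standard_normal()  # O: partial velocity refresh
    return x, v

# gamma = 0 recovers a deterministic leapfrog trajectory, as in HMC;
# gamma > 0 continuously randomizes the trajectory.
x, v = langevin_trajectory(1.5, 0.0, n_steps=10, h=0.1, gamma=1.0)
```

Setting `gamma = 0` makes the refreshment a no-op and recovers the periodic, deterministic trajectories whose tuning sensitivity the talk addresses.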

Raed Blel
Speaker's institution
ENPC
Date and time of the talk
Location of the talk
Seminar room
Abstract

The main focus of this article is to provide a mathematical study of the algorithm proposed in [6], where the authors introduced a variance reduction technique for the computation of parameter-dependent expectations using a reduced basis paradigm. We study the effect of Monte Carlo sampling on the theoretical properties of greedy algorithms. In particular, using concentration inequalities for the empirical measure in Wasserstein distance proved in [14], we provide sufficient conditions on the number of samples used for the computation of empirical variances at each iteration of the greedy procedure to guarantee that the resulting algorithm is a weak greedy algorithm with high probability. These theoretical results are not fully practical, and we therefore propose a heuristic procedure, inspired by this theoretical study, to choose the number of Monte Carlo samples at each iteration; it provides satisfactory results on several numerical test cases.
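The greedy algorithm studied in the talk is beyond a short sketch, but the variance-reduction principle that reduced-basis approaches of this kind build on, using a cheap surrogate with a known mean as a control variate, can be illustrated with a toy example (the surrogate below is a hypothetical first-order expansion, not a reduced basis):

```python
import numpy as np

rng = np.random.default_rng(7)

def cv_estimate(f, g, g_mean, X):
    """Control-variate Monte Carlo estimate of E[f(X)], using a cheap
    surrogate g whose mean is known exactly: E[f] = E[f - g] + E[g]."""
    return np.mean(f(X) - g(X)) + g_mean

f = lambda x: np.exp(0.1 * x)
g = lambda x: 1.0 + 0.1 * x     # surrogate: first-order expansion of f
g_mean = 1.0                    # E[g(X)] for X ~ N(0, 1), in closed form

X = rng.normal(size=10_000)
est_plain = f(X).mean()         # plain Monte Carlo estimate
est_cv = cv_estimate(f, g, g_mean, X)
# Var(f - g) << Var(f), so est_cv has a much smaller Monte Carlo error.
```

The closer the surrogate is to the quantity of interest, the smaller the variance of the residual, which is exactly the quantity the greedy procedure must estimate from samples at each iteration.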

Camilla Fiorini
Speaker's institution
M2N, Conservatoire National des Arts et Métiers
Date and time of the talk
Location of the talk
Seminar room
Abstract

In this work we consider the surface quasi-geostrophic (SQG) system under location uncertainty (LU) and propose a Milstein-type scheme for these equations, which is then used in a multi-step method. The LU framework is based on the decomposition of the Lagrangian velocity into two components: a large-scale smooth component and a small-scale stochastic one. This decomposition leads to a stochastic transport operator, from which one can, in turn, derive the stochastic LU version of any classical fluid-dynamics system.

The SQG system in particular consists of a partial differential equation, which models the stochastic transport of the buoyancy, and an operator which relates the velocity to the buoyancy.

For these kinds of equations, the Euler-Maruyama scheme converges with weak order 1 and strong order 0.5. Our aim is to develop higher-order schemes in time: the first step is to consider the Milstein scheme, which improves the strong convergence to order 1.
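The strong-order gap between the two schemes can be observed on a much simpler problem than the SQG system: a scalar SDE with multiplicative noise, dX = aX dt + bX dW, whose exact solution (geometric Brownian motion) is known in closed form. The following sketch compares the mean pathwise error of both schemes:

```python
import numpy as np

rng = np.random.default_rng(1)

def strong_errors(a=1.0, b=0.5, T=1.0, n=256, n_paths=2000):
    """Mean strong (pathwise) error at time T of Euler-Maruyama vs
    Milstein for dX = a X dt + b X dW, against the exact solution."""
    h = T / n
    dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, n))
    x_em = np.ones(n_paths)
    x_mil = np.ones(n_paths)
    for k in range(n):
        dw = dW[:, k]
        x_em = x_em + a * x_em * h + b * x_em * dw
        # Milstein adds the 0.5 * b * (db/dx) * (dW**2 - h) correction,
        # which here reads 0.5 * b**2 * x * (dW**2 - h).
        x_mil = x_mil + a * x_mil * h + b * x_mil * dw \
                + 0.5 * b**2 * x_mil * (dw**2 - h)
    x_exact = np.exp((a - 0.5 * b**2) * T + b * dW.sum(axis=1))
    return np.abs(x_em - x_exact).mean(), np.abs(x_mil - x_exact).mean()

e_em, e_mil = strong_errors()
print(e_em, e_mil)   # the Milstein error is markedly smaller
```

Halving the step size should shrink the Euler-Maruyama error by roughly √2 and the Milstein error by roughly 2, consistent with strong orders 0.5 and 1.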

Michael Fanuel
Speaker's institution
Université de Lille
Date and time of the talk
Location of the talk
Abstract

Determinantal Point Processes (DPPs) elegantly model repulsive point patterns. A natural problem is the estimation of a DPP given a few samples. Parametric and nonparametric inference methods have been studied in the finite case, i.e. when the point patterns are sampled in a finite ground set. In the continuous case, several parametric methods have been proposed, but nonparametric methods have received little attention. In this talk, we discuss a nonparametric approach for continuous DPP estimation leveraging recent advances in kernel methods. We show that a restricted version of the maximum likelihood estimation (MLE) problem falls within the scope of a recent representer theorem for nonnegative functions in a Reproducing Kernel Hilbert Space. This leads to a finite-dimensional problem, with strong statistical ties to the original MLE.

Reference: https://arxiv.org/pdf/2106.14210.pdf
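For background on the finite case mentioned in the abstract (the talk itself concerns continuous DPPs), the likelihood of an L-ensemble DPP is a simple ratio of determinants, P(A) = det(L_A) / det(L + I), which can be sketched in a few lines:

```python
import numpy as np

def dpp_log_likelihood(L, samples):
    """Log-likelihood of observed subsets under a finite L-ensemble DPP,
    where P(A) = det(L_A) / det(L + I)."""
    n = L.shape[0]
    _, log_norm = np.linalg.slogdet(L + np.eye(n))
    ll = 0.0
    for A in samples:
        idx = np.asarray(A, dtype=int)
        _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += logdet - log_norm
    return ll

# Toy example: 3 ground points with a mildly repulsive kernel.
L = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
ll = dpp_log_likelihood(L, [[0, 2], [1]])
```

Maximizing this quantity over a parametrized kernel L is the finite MLE problem; the talk's contribution is a nonparametric analogue of it in the continuous setting.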

Alexandre Poulain
Speaker's institution
Simula Research Laboratory, Oslo
Date and time of the talk
Location of the talk
Zoom Planet
Abstract

The Cahn-Hilliard equation, arising from physics, describes the phase separation occurring in a material during a sudden cooling process and has been the subject of extensive research [2]. An interesting application of this equation is its capacity to model cell populations undergoing attraction and repulsion effects. For this application, we consider a variant of the Cahn-Hilliard equation with a single-well potential and a degenerate mobility. This particular form introduces numerous difficulties, especially for numerical simulations. We propose a relaxation of the equation to tackle these issues and analyze the resulting system. Interestingly, this relaxed version of the degenerate Cahn-Hilliard equation bears some similarity with a nonlinear Keller-Segel model. We also describe a simple finite element scheme that preserves the critical physical (or biological) properties using an upwind approach.

Titouan Vayer
Speaker's institution
ENS Lyon
Date and time of the talk
Location of the talk
Seminar room
Abstract

Nowadays, large-scale machine learning faces a number of fundamental computational challenges, triggered by the high dimensionality of modern data and the increasing availability of very large training collections. These data can also be of a very complex nature, such as those described by the graphs that are integral to many application areas. In this talk I will present some solutions to these problems. I will introduce the Compressive Statistical Learning (CSL) theory, a general framework for resource-efficient large-scale learning in which the training data is summarized in a single small vector (called a sketch) that captures the information relevant to the learning task. We will show how Optimal Transport (OT) can help us establish statistical guarantees for this type of learning problem. I will also show how OT can allow us to obtain efficient representations of structured data, thanks to the Gromov-Wasserstein distance. I will address concrete learning tasks on graphs such as online graph subspace estimation and tracking, graph partitioning, clustering and completion.
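One concrete instance of a sketch (an illustrative choice, not necessarily the one used in the talk) is the average of random Fourier features: the whole dataset collapses into a single m-dimensional vector that approximates its characteristic function at random frequencies, so datasets drawn from the same distribution produce nearby sketches:

```python
import numpy as np

rng = np.random.default_rng(42)

def sketch(X, Omega):
    """Empirical sketch: average of random Fourier features exp(i <omega, x>),
    compressing an (n, d) dataset into a single m-dimensional vector."""
    return np.exp(1j * X @ Omega.T).mean(axis=0)

d, m = 2, 64
Omega = rng.normal(size=(m, d))            # random frequencies

X1 = rng.normal(size=(5000, d))            # two samples of one distribution...
X2 = rng.normal(size=(5000, d))
X3 = rng.normal(loc=3.0, size=(5000, d))   # ...and a shifted distribution

d_same = np.linalg.norm(sketch(X1, Omega) - sketch(X2, Omega))
d_diff = np.linalg.norm(sketch(X1, Omega) - sketch(X3, Omega))
print(d_same, d_diff)   # the sketch separates the two distributions
```

Learning then proceeds from the m numbers of the sketch alone, regardless of how many samples were compressed into it.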

Josselin Massot
Speaker's institution
IRMAR - Université de Rennes 1
Date and time of the talk
Location of the talk
Salle Eole
Abstract

In this talk, we study an electron plasma in which the particles can be split into two distinct populations, cold and hot, leading to a linearized hybrid fluid/kinetic Vlasov-Maxwell model, restricted here to 1 dimension in space and 3 in velocity. Our goal is to propose two numerical methods to solve this model. The first is based on the Hamiltonian structure of the system, while the second uses an exponential integrator (or Lawson method), which makes it easy to increase the order by removing a stability constraint coming from the linear part of the problem. We will then study the possibility of approximating the exponential of the linear part when it cannot be determined in closed form. The relative error and computation time of the various simulation methods will allow us to compare them in a multidimensional setting.
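The mechanism by which a Lawson method removes the linear stability constraint can be seen on a toy scalar problem (far simpler than the hybrid Vlasov-Maxwell model of the talk): for u' = λu + N(u) with a stiff linear part, the scheme integrates the exponential of the linear part exactly, so the step size is no longer limited by |λ|:

```python
import numpy as np

def lawson_euler(u0, lam, N, h, n_steps):
    """Lawson (exponential) Euler scheme for u' = lam * u + N(u):
    the stiff linear part is integrated exactly through exp(h * lam),
    removing its stability constraint on the step size."""
    E = np.exp(h * lam)
    u = u0
    for _ in range(n_steps):
        u = E * (u + h * N(u))
    return u

def explicit_euler(u0, lam, N, h, n_steps):
    u = u0
    for _ in range(n_steps):
        u = u + h * (lam * u + N(u))
    return u

lam, N = -100.0, np.sin     # stiff linear part, bounded nonlinearity
h, T = 0.05, 1.0            # h is far above explicit Euler's limit 2/|lam|
n = int(T / h)
u_lawson = lawson_euler(1.0, lam, N, h, n)    # stays stable, decays to 0
u_euler = explicit_euler(1.0, lam, N, h, n)   # blows up at this step size
```

When exp(hλ) (or its matrix analogue) cannot be computed in closed form, it must itself be approximated, which is precisely the question addressed at the end of the talk.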

TBA
Speaker's institution
LMJL
Date and time of the talk
Location of the talk
Abstract

TBA

Philipp Trunschke
Speaker's institution
Technische Universität Berlin
Date and time of the talk
Location of the talk
Zoom
Abstract

We consider best approximation problems in a nonlinear subset of a Banach space of functions. The norm is assumed to be a generalization of the L2-norm for which only a weighted Monte Carlo estimate can be computed. We establish error bounds for the empirical best approximation error in this general setting and use these bounds to derive a new, sample efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical random partial differential equation.
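In the simplest instance of this setting, the exact L2 norm is replaced by a Monte Carlo estimate at sampled points and the empirical best approximation is a least-squares fit. The sketch below uses a linear model set of low-degree polynomials with uniform sampling (so all weights equal 1); the talk treats genuinely nonlinear model sets such as low-rank tensors, where the sample-efficiency question becomes delicate:

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_best_approx(f, basis, n_samples):
    """Empirical L2([-1, 1]) best approximation: estimate the L2 norm
    by a Monte Carlo sum at sampled points (uniform sampling, so all
    weights equal 1) and solve the resulting least-squares problem."""
    x = rng.uniform(-1.0, 1.0, n_samples)
    A = np.column_stack([b(x) for b in basis])
    coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return coef

# Model set: polynomials of degree <= 2.
basis = [np.ones_like, lambda x: x, lambda x: x**2]
coef = empirical_best_approx(np.cos, basis, n_samples=500)
```

The error bounds of the talk quantify how many such samples are needed for the empirical minimizer to be close to the true best approximation in the model set.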

Claire Chanais
Speaker's institution
Université Lille 1, Laboratoire Paul Painlevé
Date and time of the talk
Location of the talk
Seminar room
Abstract

In this talk, I will introduce a mathematical model of steel corrosion under geological storage conditions. After a review of the mathematical and numerical analysis carried out on this model over the past decade, I will detail a recent existence result for traveling-wave solutions obtained through a computer-assisted proof. This work was done in collaboration with Maxime Breden (Ecole Polytechnique) and Antoine Zurek (TU Vienna). Finally, I will present perspectives for extending the corrosion model.