We consider the problem of learning a graph modeling the statistical relations of the d variables of a dataset with n samples. Standard approaches amount to searching for a precision matrix, representative of a Gaussian graphical model, that adequately explains the data. However, most maximum-likelihood-based estimators require storing the d^2 entries of the empirical covariance matrix, which can become prohibitive in a high-dimensional setting.
In this talk, we adopt a “compressive” viewpoint and aim to estimate a sparse precision matrix from a sketch of the data, i.e., a low-dimensional vector of size m≪d^2 carefully designed from the data using nonlinear random features (e.g., rank-one projections). Under certain spectral assumptions, we show that it is possible to recover the precision matrix from a sketch of size m=Ω((d+2k)log(d)), where k is the maximal number of edges of the underlying graph. These information-theoretic guarantees are inspired by compressed sensing theory. We investigate the possibility of achieving practical recovery with an iterative algorithm based on the graphical lasso, viewed as a specific denoiser. We compare our approach with the graphical lasso on synthetic datasets, demonstrating its favorable performance even when the dataset is compressed.
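To make the memory argument concrete, here is a minimal, illustrative numpy sketch (not the authors' implementation; all variable names and sizes are assumptions) of rank-one projection sketching: each sketch entry a_iᵀ Σ a_i, with Σ the empirical covariance, can be computed as ||X a_i||²/n directly from the data, so the d×d covariance matrix is never formed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 30, 200  # samples, dimension, sketch size (illustrative values)

X = rng.standard_normal((n, d))  # data matrix, one sample per row
A = rng.standard_normal((m, d))  # m random rank-one projection directions a_i

# Sketch entry i is a_i^T Sigma a_i with Sigma = X^T X / n.
# It equals ||X a_i||^2 / n, so only X and A need to be stored,
# never the d x d empirical covariance matrix itself.
sketch = np.sum((X @ A.T) ** 2, axis=0) / n

# Sanity check against the direct (memory-hungry) computation.
Sigma = X.T @ X / n
direct = np.einsum("id,de,ie->i", A, Sigma, A)
assert np.allclose(sketch, direct)
```

For m≪d^2 this replaces O(d^2) storage with O(m), at the price of the recovery problem studied in the talk.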
Joint work with Titouan Vayer, Rémi Gribonval, and Paulo Gonçalves.
Compressive Recovery of Sparse Precision Matrices
Speaker
Etienne Lassalle
Speaker's affiliation
LS2N
Date and time of the talk
01-04-2025 - 11:00:00
Location of the talk
Salles des Séminaires