Weighted sparse and low-rank least squares approximation

Speaker
Philipp Trunschke
Speaker's affiliation
LMJL
Date and time of the talk
Location
Seminar room

Many functions of interest exhibit weighted summability of their coefficients with respect to some dictionary of basis functions. This property allows us to derive close-to-optimal best n-term approximation rates by means of a new weighted version of Stechkin's Lemma. It is well known that these best n-term approximations can be estimated efficiently from samples. In fact, highly refined sample complexity results guarantee quasi-best n-term approximations with high probability. But the algorithms that perform this approximation have a time complexity that is linear in the size of the dictionary of basis functions. This is problematic, since this size typically grows rapidly as the dimension of the sought function increases. To bypass this issue, we propose to encode the coefficients in a simultaneously sparse and low-rank tensor format. Compared to a direct sparse approximation, the resulting sparse tensor trees have the advantage that we can use a large (tensor product) dictionary with a manageable time complexity. We transfer the close-to-optimal sample complexity results for classical weighted sparse vectors to our new model class of sparse and low-rank tensors. For this, we utilise a novel result that allows us to transfer the restricted isometry property from one set to another set that is sufficiently close to the first.
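As a small illustration of the best n-term approximation mentioned above, the following sketch keeps the n largest-magnitude coefficients of a sequence with algebraic decay and measures the resulting l2 error. This is a generic textbook illustration, not the speaker's algorithm; the decay rate and dictionary size chosen here are arbitrary assumptions for the example.

```python
import numpy as np

def best_n_term(coeffs, n):
    """Keep the n largest-magnitude coefficients and zero out the rest.

    For an orthonormal dictionary, this yields the best n-term
    approximation of the expansion in the l2 sense.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    idx = np.argsort(np.abs(coeffs))[::-1][:n]  # indices of the n largest entries
    approx = np.zeros_like(coeffs)
    approx[idx] = coeffs[idx]
    return approx

# Hypothetical coefficient sequence with algebraic decay c_k ~ (k+1)^(-2);
# such (weighted) summability is what drives fast n-term rates.
c = np.array([(k + 1) ** -2.0 for k in range(100)])
a = best_n_term(c, 10)
error = np.linalg.norm(c - a)  # l2 error: the norm of the discarded tail
```

Note that selecting the n largest entries already costs time proportional to the dictionary size, which is the linear scaling the abstract identifies as problematic in high dimensions.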