Abstract
In this talk, we investigate sampling strategies for the approximation of functions by weighted least-squares. Although a quasilinear sampling budget can already be achieved by i.i.d. random draws from an adapted density [3], the number of points can be reduced further [2, 4], with provable performance estimates thanks to the solution of the Kadison-Singer problem [6]. However, this involves a subsampling step which is not algorithmically tractable. We show how greedy sampling methods [1, 5] circumvent this defect while attaining optimal sample sizes.
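To fix ideas, here is a minimal sketch (not the speaker's implementation) of the weighted least-squares method of [3] in a simple setting: approximation in the span of the first m orthonormal Legendre polynomials on [-1, 1], with points drawn i.i.d. from the adapted density k_m(x)/m, where k_m is the inverse Christoffel function, via rejection sampling. All function names and the test function exp are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthonormal_legendre(x, m):
    # L_j(x) = sqrt(2j+1) * P_j(x): orthonormal w.r.t. dx/2 on [-1, 1]
    V = np.polynomial.legendre.legvander(x, m - 1)
    return V * np.sqrt(2 * np.arange(m) + 1)

def sample_adapted(m, n):
    # Rejection-sample n points from the density k_m(x)/m (w.r.t. dx/2),
    # where k_m(x) = sum_j L_j(x)^2; the bound k_m(x) <= m^2 gives the envelope.
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1, 1, size=4 * n * m)
        k = (orthonormal_legendre(x, m) ** 2).sum(axis=1)
        accepted = x[rng.uniform(0, 1, size=x.size) < k / m**2]
        pts.extend(accepted.tolist())
    return np.array(pts[:n])

def weighted_lsq(f, m, n):
    # Weighted least-squares fit with weights w(x) = m / k_m(x),
    # which restore orthonormality in expectation.
    x = sample_adapted(m, n)
    V = orthonormal_legendre(x, m)
    w = m / (V ** 2).sum(axis=1)
    coeffs, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * V,
                                 np.sqrt(w) * f(x), rcond=None)
    return coeffs

coeffs = weighted_lsq(np.exp, m=6, n=200)
xg = np.linspace(-1, 1, 400)
err = np.max(np.abs(orthonormal_legendre(xg, 6) @ coeffs - np.exp(xg)))
```

With a sampling budget n of order m log m, the weighted Gram matrix is well conditioned with high probability and the fit is near-optimal in L2; the subsampling and greedy strategies discussed in the talk aim to bring n down toward order m.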
References
[1] J. Batson, D. A. Spielman, and N. Srivastava. “Twice-Ramanujan sparsifiers”. In: SIAM J. Comput. 41.6 (2012), pp. 1704–1721.
[2] A. Cohen and M. Dolbeault. “Optimal pointwise sampling for L2 approximation”. In: Journal of Complexity 68 (2022), p. 101602.
[3] A. Cohen and G. Migliorati. “Optimal weighted least-squares methods”. In: SMAI J. Comput. Math. 3 (2017), pp. 181–203.
[4] C. Haberstich, A. Nouy, and G. Perrin. “Boosted optimal weighted least-squares”. In: Math. Comp. 91.335 (2022), pp. 1281–1315.
[5] Y. T. Lee and H. Sun. “Constructing linear-sized spectral sparsification in almost-linear time”. In: SIAM J. Comput. 47.6 (2018), pp. 2315–2336.
[6] A. W. Marcus, D. A. Spielman, and N. Srivastava. “Interlacing families II: Mixed characteristic polynomials and the Kadison-Singer problem”. In: Ann. of Math. (2) 182.1 (2015), pp. 327–350.