Department of Mathematics,
University of California San Diego
****************************
Math 278B - Mathematics of Information, Data, and Signals Seminar
Tino Ullrich
TU Chemnitz
A New Subsampling Technique for Random Points and Optimal Least Squares Approximation of High-Dimensional Functions
Abstract:
We provide a new general upper bound for the minimal L2 worst-case recovery error in the framework of reproducing kernel Hilbert spaces (RKHS), where only n function samples are allowed. This quantity can be bounded in terms of the singular numbers of the compact embedding into the space of square-integrable functions. It turns out that in many relevant situations this quantity is asymptotically worse only by a factor of sqrt(log n) compared to the singular numbers.

The algorithm which realizes this behavior is a weighted least squares algorithm based on a specific set of sampling nodes which works for the whole class of functions simultaneously. These points are constructed from a random draw with respect to a distribution tailored to the spectral properties of the reproducing kernel (importance sampling), in combination with a subsampling procedure coming from the celebrated proof of Weaver's conjecture, which was shown to be equivalent to the Kadison-Singer problem.

For the above multivariate setting, it is still a fundamental open problem whether sampling algorithms are as powerful as algorithms allowing general linear information, such as Fourier or wavelet coefficients. However, the gap is now rather small. As a consequence, we may study well-known scenarios where it was widely believed that sparse grid sampling recovery methods perform optimally. It turns out that this is not the case for dimensions d greater than 2.

This is joint work with N. Nagel and M. Schaefer from TU Chemnitz.
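As a schematic illustration of the kind of bound described above (with unspecified constants b, C > 0; this is an indicative form, not the talk's precise statement): if sigma_1 >= sigma_2 >= ... denote the singular numbers of the embedding of the RKHS into L2, then the minimal worst-case error g_n achievable from n samples admits a bound of the type

\[ g_{\lfloor bn \rfloor}^2 \;\le\; C \, \frac{\log n}{n} \sum_{k \ge n} \sigma_k^2 . \]

Under polynomial decay sigma_k of order k^{-s} with s > 1/2, the right-hand side is of order (log n) n^{-2s}, i.e. the sampling error exceeds the singular numbers only by the sqrt(log n) factor mentioned in the abstract.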
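For concreteness, below is a minimal numerical sketch of the importance-sampled weighted least squares component in a toy univariate setting: the cosine basis on [0,1] stands in for the eigenbasis of the kernel embedding, all names and parameters are illustrative, and the Kadison-Singer-type subsampling step is omitted (an oversampled random point set is used instead).

import numpy as np

rng = np.random.default_rng(0)

def basis(x, m):
    # First m orthonormal cosine basis functions on [0, 1]; a stand-in
    # for the eigenbasis of the compact embedding in the talk's setting.
    x = np.asarray(x, dtype=float)
    B = np.ones((x.size, m))
    B[:, 1:] = np.sqrt(2.0) * np.cos(np.pi * np.outer(x, np.arange(1, m)))
    return B

def density(x, m):
    # Sampling density rho_m(x) = (1/m) * sum_k b_k(x)^2, i.e. a density
    # tailored to the spectral data of the kernel (importance sampling).
    return (basis(x, m) ** 2).sum(axis=1) / m

def draw_nodes(n, m):
    # Rejection sampling from rho_m; for this basis rho_m <= 2 on [0, 1].
    nodes = np.empty(0)
    while nodes.size < n:
        x = rng.random(4 * n)
        keep = 2.0 * rng.random(4 * n) < density(x, m)
        nodes = np.concatenate([nodes, x[keep]])
    return nodes[:n]

def weighted_lsq(f, n, m):
    # Solve min_c sum_i w_i * (f(x_i) - sum_k c_k b_k(x_i))^2
    # with weights w_i = 1 / rho_m(x_i), applied via square roots.
    x = draw_nodes(n, m)
    sw = 1.0 / np.sqrt(density(x, m))
    coef, *_ = np.linalg.lstsq(sw[:, None] * basis(x, m), sw * f(x), rcond=None)
    return coef

# Toy usage: n ~ m log m weighted random samples recover a smooth function.
f = lambda x: np.exp(np.cos(2.0 * np.pi * x))
m = 20
coef = weighted_lsq(f, n=2 * int(m * np.log(m)), m=m)
xs = np.linspace(0.0, 1.0, 1001)
print("max error:", np.abs(basis(xs, m) @ coef - f(xs)).max())

With n of order m log m samples, the weighted Gram matrix concentrates near the identity with high probability, which is what makes the least squares solve stable; the subsampling procedure discussed in the talk then thins such an oversampled set down to O(m) points while preserving this stability.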
Host: Rayan Saab
February 4, 2021
10:30 AM
Zoom link: https://msu.zoom.us/j/96421373881 (passcode: first prime number greater than 100)
****************************