The Department of Mathematical Sciences will host Dr. Grant Rotskoff of Stanford University for its math seminar series. Dr. Rotskoff will present “Convergence properties of shallow neural networks: implications and applications in scientific computing.” This free seminar will take place on Friday, April 1, at 11 a.m. in BSB 132. A virtual option is also available: https://tinyurl.com/9nrnveur.
Abstract
The surprising flexibility and undeniable empirical success of machine learning algorithms have inspired many theoretical explanations for the efficacy of neural networks. Here, I will briefly introduce one perspective that provides not only asymptotic guarantees of trainability and accuracy in high-dimensional learning problems but also some prescriptions and design principles for learning. Bolstered by the favorable scaling of these algorithms in high-dimensional problems, I will turn to variational formulations of high-dimensional PDEs. From the perspective of an applied mathematician, these problems often appear hopeless; they are not only high-dimensional but also dominated by rare events. However, with neural networks in the toolkit, at least the dimensionality is somewhat less intimidating. I will describe an algorithm that combines stochastic gradient descent with importance sampling to optimize a function representation of the solution. Finally, I will provide numerical evidence of the power and limitations of this approach.
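To give a flavor of the kind of algorithm the abstract alludes to, the sketch below shows a generic combination of stochastic gradient descent with importance sampling for a variational PDE loss, using a small neural network as the function representation of the solution. This is an illustrative assumption, not Dr. Rotskoff's actual method: the toy Poisson-style energy on the unit cube, the Beta proposal distribution, the boundary cutoff, the network size, and the use of PyTorch are all choices made here for the example.

```python
# Hypothetical sketch (not the speaker's algorithm): minimize a Dirichlet-energy
# style variational loss for a toy Poisson problem on [0,1]^d with SGD, using
# importance sampling of the integration points.
import torch

d = 10                      # spatial dimension of the toy problem (assumption)
net = torch.nn.Sequential(  # shallow network representing the solution u_theta
    torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.SGD(net.parameters(), lr=1e-3)

def u(x):
    # enforce u = 0 on the boundary of [0,1]^d with a simple cutoff factor
    bump = torch.prod(x * (1.0 - x), dim=1, keepdim=True)
    return bump * net(x)

def f(x):
    # placeholder source term for the toy problem
    return torch.ones(x.shape[0], 1)

def sample_proposal(n):
    # importance-sampling proposal: independent Beta(2,2) per coordinate,
    # which concentrates samples away from the boundary
    beta = torch.distributions.Beta(2.0, 2.0)
    x = beta.sample((n, d))
    q = beta.log_prob(x).sum(dim=1, keepdim=True).exp()  # proposal density q(x)
    return x, q

for step in range(2000):
    x, q = sample_proposal(256)
    x.requires_grad_(True)
    ux = u(x)
    grad_u = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    # pointwise energy density: 1/2 |grad u|^2 - f u
    dens = 0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * ux
    # reweighting by 1/q(x) keeps the Monte Carlo estimate of the energy unbiased
    loss = (dens / q).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this kind of scheme, the proposal distribution is the main lever: tailoring it to where the integrand (or the rare events the abstract mentions) carries most of its weight reduces the variance of the stochastic gradients, while the 1/q reweighting preserves unbiasedness.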