About this Seminar

Deep learning techniques are increasingly applied to scientific problems where the precision of networks is crucial. Although neural networks are universal function approximators in theory, in practice they struggle to reduce prediction errors below a certain threshold, even with large network sizes and extended training. We developed an algorithm to tackle this issue and demonstrate that the prediction error from multi-stage training can nearly reach double-precision floating-point machine precision (Wang and Lai, JCP 2024). This mitigates neural networks’ spectral bias, a known issue for multiscale problems. I will discuss two examples where neural networks are used to solve multiscale inverse problems. The first concerns the search for singularities in the Euler equations (Wang-Lai-Gómez-Serrano-Buckmaster, PRL 2023). The second leverages large-scale Earth observations to uncover constitutive models for polar ice (Wang et al., Science 2025). Both involve discoveries beyond previously known results.

Biography:
Ching-Yao Lai (Yao) is an Assistant Professor in the Department of Geophysics and an Affiliated Faculty of the Institute for Computational and Mathematical Engineering (ICME) at Stanford. Before joining Stanford, she was an Assistant Professor at Princeton University. She received an undergraduate degree (2013) in Physics from National Taiwan University and a PhD (2018) in Mechanical and Aerospace Engineering from Princeton University. She completed her postdoctoral research at Columbia University, where she received the Lamont Postdoctoral Fellowship. Her current research focuses on enhancing the representation of machine-learning models to tackle multiscale problems. She was the recipient of the 2023 Google Research Scholar Award and the 2024 Sloan Research Fellowship.

Seminar Details
Seminar Date
Thursday, May 15, 2025
12:00 PM - 1:00 PM
Status
Happening As Scheduled