A recent line of research on deep learning shows that the training of extremely wide neural networks can be characterized by a kernel function called the neural tangent kernel (NTK). However, it is known that this type of result does not perfectly match practice, as NTK-based analysis requires the network weights to stay very close to their initialization throughout training, and cannot handle regularizers or gradient noise. In this talk, I will present a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit a "kernel-like" behavior. This implies that the training loss converges linearly up to a certain accuracy. I will also discuss the generalization error of an infinitely wide two-layer neural network trained by noisy gradient descent with weight decay. A minimal sketch of the training scheme the abstract refers to, noisy gradient descent with weight decay on a two-layer network, is given below; all names, data, and hyperparameters in it are illustrative assumptions, not details from the talk.
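```python
# Illustrative sketch (not from the talk): noisy gradient descent with weight
# decay on a two-layer ReLU network with NTK-style 1/sqrt(m) output scaling.
# The toy data and all hyperparameters below are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

m, d, n = 1024, 10, 50                          # width, input dim, sample size
X = rng.standard_normal((n, d))                 # synthetic inputs
y = rng.standard_normal(n)                      # synthetic targets

W = rng.standard_normal((m, d)) / np.sqrt(d)    # trainable first-layer weights
a = rng.choice([-1.0, 1.0], size=m)             # fixed second-layer signs

lr, wd, noise_std = 0.1, 1e-3, 1e-2             # step size, weight decay, noise level

def forward(W):
    # f(x) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x)
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

for step in range(100):
    resid = forward(W) - y                      # squared-loss residuals
    act = (X @ W.T > 0).astype(float)           # ReLU derivative, shape (n, m)
    # gradient of (1/(2n)) * sum_i resid_i^2 with respect to W
    grad = ((act * resid[:, None]).T @ X) * (a[:, None] / (np.sqrt(m) * n))
    # noisy gradient step with weight decay: plain gradient plus an L2
    # penalty term (wd * W) plus injected Gaussian noise
    noise = noise_std * rng.standard_normal(W.shape)
    W -= lr * (grad + wd * W + noise)
```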
August 14
11:00am - 12:00pm
Venue
https://hkust.zoom.us/j/5616960008
Speaker/Performer
Dr. Yuan CAO
UCLA
Organizer
Department of Mathematics
Contact
mathseminar@ust.hk
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language
English