A recent line of research on deep learning shows that the training of extremely wide neural networks can be characterized by a kernel function called the neural tangent kernel (NTK). However, this type of result does not fully match practice: NTK-based analysis requires the network weights to stay very close to their initialization throughout training, and it cannot handle regularizers or gradient noise. In this talk, I will present a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit a "kernel-like" behavior. This implies that the training loss converges linearly up to a certain accuracy. I will also discuss the generalization error of an infinitely wide two-layer neural network trained by noisy gradient descent with weight decay.
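The training procedure studied in the talk, noisy gradient descent with weight decay, can be illustrated with a minimal sketch. The example below is an assumption-laden toy (a least-squares problem with illustrative hyperparameters, not the networks or constants from the talk); it only shows the shape of the update rule: gradient step plus L2 regularization plus injected Gaussian noise.

```python
import numpy as np

# Toy illustration of noisy gradient descent with weight decay.
# Problem, hyperparameters, and scale are illustrative only.
rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def loss(w):
    # Mean-squared-error training loss.
    return 0.5 * np.mean((X @ w - y) ** 2)

w = np.zeros(d)
lr = 0.01            # step size
weight_decay = 1e-3  # L2 regularization strength
noise_std = 1e-3     # std of the injected gradient noise

losses = [loss(w)]
for _ in range(500):
    # Gradient of the loss plus the weight-decay term,
    # perturbed by Gaussian noise.
    grad = X.T @ (X @ w - y) / n + weight_decay * w
    w -= lr * (grad + noise_std * rng.normal(size=d))
    losses.append(loss(w))

print(losses[0], losses[-1])
```

On this toy problem the loss decreases geometrically until it reaches a floor set by the noise and regularization, mirroring the abstract's statement that the training loss "converges linearly up to a certain accuracy."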
August 14
11:00am - 12:00pm
Venue
https://hkust.zoom.us/j/5616960008
Speaker/Performer
Dr. Yuan CAO
UCLA
Organizer
Department of Mathematics
Contact
mathseminar@ust.hk
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language
English