A recent breakthrough in deep learning theory shows that the training of over-parameterized deep neural networks (DNNs) can be characterized by the neural tangent kernel (NTK). However, existing optimization and generalization guarantees for DNNs typically require a network width larger than a high-degree polynomial of the training sample size $n$ and the inverse target accuracy $\epsilon^{-1}$. In this talk, I will discuss how this over-parameterization condition can be relaxed to more practical settings. Specifically, I will first explain why over-parameterized DNNs can be optimized to zero training error in the NTK regime, and then show what kinds of functions can be learnt by DNNs with small test error. Under standard NTK-type assumptions, these optimization and generalization guarantees hold with network width polylogarithmic in $n$ and $\epsilon^{-1}$.
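For context, the NTK of a network $f(\cdot; \theta)$ is commonly defined (this is standard background, not a formula taken from the talk) as the kernel induced by the network's parameter gradients at initialization $\theta_0$:

$$\Theta(x, x') = \big\langle \nabla_\theta f(x; \theta_0),\, \nabla_\theta f(x'; \theta_0) \big\rangle.$$

In the infinite-width limit this kernel remains essentially fixed during gradient descent, which is why training over-parameterized DNNs can be analyzed as kernel regression with $\Theta$.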
April 30
3:00pm - 4:00pm
Venue
https://hkust.zoom.com.cn/j/5616960008
Speaker/Performer
Dr. Yuan CAO
University of California at Los Angeles
Organizer
Department of Mathematics
Contact
mathseminar@ust.hk
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language
English