A recent breakthrough in deep learning theory shows that the training of over-parameterized deep neural networks (DNNs) can be characterized by the neural tangent kernel (NTK). However, existing optimization and generalization guarantees for DNNs typically require the network width to be larger than a high-degree polynomial of the training sample size $n$ and the inverse target accuracy $\epsilon^{-1}$. In this talk, I will discuss how this over-parameterization condition can be improved to more practical settings. Specifically, I will first explain why over-parameterized DNNs can be optimized to zero training error in the NTK regime, and then show what kinds of functions can be learnt by DNNs with small test error. Under standard NTK-type assumptions, these optimization and generalization guarantees hold with network width polylogarithmic in $n$ and $\epsilon^{-1}$.
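The NTK referred to in the abstract is, concretely, the inner product of parameter gradients of the network output at two inputs. The following short sketch (in JAX, not taken from the talk; the two-layer ReLU architecture, width, and function names are illustrative assumptions) shows how this empirical kernel can be computed for a toy network.

# Minimal, hypothetical sketch of the empirical neural tangent kernel
# K(x, x') = <grad_theta f(theta, x), grad_theta f(theta, x')>
# for a small two-layer ReLU network; widths and names are illustrative only.
import jax
import jax.numpy as jnp

def init_params(key, d, m):
    # Two-layer network f(x) = v^T relu(W x) / sqrt(m); m is the width.
    k1, k2 = jax.random.split(key)
    return {"W": jax.random.normal(k1, (m, d)),
            "v": jax.random.normal(k2, (m,))}

def f(params, x):
    # Scalar-output network evaluated at a single input x.
    h = jax.nn.relu(params["W"] @ x)
    return params["v"] @ h / jnp.sqrt(params["W"].shape[0])

def empirical_ntk(params, x1, x2):
    # Inner product of the parameter gradients at the two inputs.
    g1 = jax.grad(f)(params, x1)
    g2 = jax.grad(f)(params, x2)
    leaves1, _ = jax.tree_util.tree_flatten(g1)
    leaves2, _ = jax.tree_util.tree_flatten(g2)
    return sum(jnp.vdot(a, b) for a, b in zip(leaves1, leaves2))

key = jax.random.PRNGKey(0)
params = init_params(key, d=5, m=1024)
x1 = jax.random.normal(jax.random.PRNGKey(1), (5,))
x2 = jax.random.normal(jax.random.PRNGKey(2), (5,))
print(empirical_ntk(params, x1, x2))

As the width m grows, this empirical kernel concentrates around a fixed kernel at initialization, which is the regime the guarantees discussed in the talk apply to.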
April 30
3pm - 4pm
Venue
https://hkust.zoom.com.cn/j/5616960008
Speaker/Performer
Dr. Yuan CAO
University of California at Los Angeles
Organizer
Department of Mathematics
Contact
mathseminar@ust.hk
Payment Details
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language
English