Recently, an intriguing phenomenon discovered in the final stages of network training has attracted great interest: the last-layer features and classifiers collapse to a simple but elegant mathematical structure, in which all training inputs from the same class are mapped to a single class-specific point in feature space, and the last-layer classifier converges to the dual of the features' class means while attaining the maximum possible margin. This phenomenon, dubbed Neural Collapse, persists across a wide variety of network architectures, datasets, and even data domains. Moreover, collapse occurs progressively from shallow to deep layers. This talk leverages the symmetry and geometry of Neural Collapse to develop a rigorous mathematical theory that explains when and why it happens under the so-called unconstrained feature model. Building on this, we show how Neural Collapse can provide guidelines for understanding and improving transferability through more efficient fine-tuning.
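
As a concrete illustration of the "simple but elegant mathematical structure" the abstract alludes to (this sketch is not part of the talk itself): under Neural Collapse, the class-mean features are known to form a simplex equiangular tight frame (ETF). The short NumPy sketch below, with the class count K = 10 chosen arbitrarily for illustration, constructs this frame and checks its two defining properties: every class mean has unit norm, and every pair of distinct class means meets at the same cosine of -1/(K-1), the most negative value K unit vectors can simultaneously share.

```python
import numpy as np

K = 10  # number of classes (arbitrary choice for illustration)

# Simplex equiangular tight frame (ETF): the configuration that the
# K class-mean features converge to under Neural Collapse, up to
# rotation and scaling.
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

# Each column (class mean) has unit norm ...
norms = np.linalg.norm(M, axis=0)
assert np.allclose(norms, 1.0)

# ... and all distinct pairs share the same maximally separated
# angle, with cosine -1/(K-1).
cosines = M.T @ M
off_diag = cosines[~np.eye(K, dtype=bool)]
assert np.allclose(off_diag, -1.0 / (K - 1))

print("pairwise cosine:", off_diag[0], "expected:", -1.0 / (K - 1))
```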

27 April
10:00am - 11:00am
Venue
Room 4475 (Lifts 25/26)
Speaker / Performer
Prof. Qing QU
University of Michigan
Organizer
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English