Recently, an intriguing phenomenon in the final stages of network training has been discovered and has attracted great interest: the last-layer features and classifiers collapse to simple yet elegant mathematical structures, in which all training inputs are mapped to class-specific points in feature space, and the last-layer classifier converges to the dual of the features' class means while attaining the maximum possible margin. This phenomenon, dubbed Neural Collapse, persists across a variety of network architectures, datasets, and even data domains. Moreover, a progressive neural collapse occurs from shallow to deep layers. This talk leverages the symmetry and geometry of Neural Collapse and develops a rigorous mathematical theory explaining when and why it happens under the so-called unconstrained feature model. Building on this, we show how it can be used to provide guidelines for understanding and improving transferability through more efficient fine-tuning.
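As a rough illustration only (not part of the talk), the sketch below shows one common way the collapse structure described above can be measured on a trained network: within-class variability shrinking relative to between-class spread, and the classifier rows aligning with the centered class means. All names here (features, labels, W, neural_collapse_metrics) are illustrative assumptions.

import numpy as np

def neural_collapse_metrics(features, labels, W):
    """features: (N, d) last-layer features; labels: (N,) class indices;
    W: (K, d) last-layer classifier weights. Both returned quantities
    are expected to approach zero as Neural Collapse emerges."""
    K = int(labels.max()) + 1
    global_mean = features.mean(axis=0)
    class_means = np.stack([features[labels == k].mean(axis=0) for k in range(K)])
    centered = class_means - global_mean  # (K, d) centered class means

    # Variability collapse: mean squared distance of features to their
    # class mean, relative to the spread of the class means themselves.
    within = np.mean([
        np.mean(np.sum((features[labels == k] - class_means[k]) ** 2, axis=1))
        for k in range(K)
    ])
    between = np.mean(np.sum(centered ** 2, axis=1))
    nc1 = within / between

    # Self-duality: after Frobenius normalization, the classifier should
    # match the centered class means (W converges to the features' dual).
    Wn = W / np.linalg.norm(W)
    Mn = centered / np.linalg.norm(centered)
    nc3 = np.linalg.norm(Wn - Mn) ** 2

    return nc1, nc3

On a network trained well past zero training error, one would expect both quantities to be close to zero; tracking them layer by layer would similarly reflect the progressive collapse from shallow to deep layers mentioned above.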

Date
27 Apr 2023
Time
10:00am - 11:00am
Where
Room 4475 (Lifts 25/26)
Speakers/Performers
Prof. Qing QU
University of Michigan
Organizer(s)
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language(s)
English