Deep neural networks can predict well even when fitting noisy data, a phenomenon called benign overfitting. In this seminar, we analyze overparametrized models under adversarial perturbation, showing that fitting noise makes the models sensitive to adversarial attack. In contrast to the natural risk, where the noise cancels out across dimensions, under an adversarial attack the small perturbation to each feature accumulates into a significant change in the output. We also study adversarial training in these overparametrized models, showing that while it increases the robustness of the model, it converges to parameters distinct from the oracle and degrades performance on natural data.
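The accumulation effect described above can be illustrated with a small sketch (a hypothetical example, not taken from the talk): for a linear model f(x) = w·x, an L-infinity perturbation of size eps on every feature shifts the output by up to eps·||w||₁, which grows with the dimension even though each per-feature change is tiny.

```python
import numpy as np

# Hypothetical illustration: in an overparametrized linear model f(x) = w.x,
# an L-infinity perturbation of size eps per feature shifts the output by up
# to eps * ||w||_1, which accumulates over the many dimensions.
rng = np.random.default_rng(0)
d = 10_000                            # heavily overparametrized dimension
w = rng.normal(size=d) / np.sqrt(d)   # weights, e.g. fit to noisy data
x = rng.normal(size=d)                # a natural input

eps = 0.01                            # small per-feature perturbation budget
delta = eps * np.sign(w)              # worst-case L-infinity attack on a linear model
natural = w @ x
adversarial = w @ (x + delta)

# The output gap equals eps * ||w||_1, far larger than the per-feature eps.
print(abs(adversarial - natural))
print(eps * np.abs(w).sum())
```

Here the per-feature perturbation is only 0.01, yet the output shift scales like eps·√d, so in high dimension the tiny perturbations do not cancel the way independent noise does under the natural risk.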

May 6
10:00am - 11:00am
Venue
https://hkust.zoom.us/j/92129409608 (Passcode: 568117)
Speaker/Performer
Mr. Zhichao HUANG
Organizer
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English