Current neural networks are easily fooled by small, deliberately crafted perturbations known as adversarial examples. Although adversarial training and its variants are currently the most effective way to achieve robustness against such attacks, their poor generalization limits their performance on test samples. In this seminar, I will present a method that improves the generalization and robust accuracy of adversarially trained networks via self-supervised test-time fine-tuning. To this end, I introduce a meta adversarial training method that incorporates the test-time fine-tuning procedure into the training phase, strengthening the correlation between the self-supervised and classification tasks and thereby providing a good starting point for test-time fine-tuning. Extensive experiments on CIFAR-10 and STL-10 with different self-supervised tasks show that the method consistently improves robust accuracy under different attack strategies, for both white-box and black-box attacks.

April 29
9:30am - 10:30am
Venue
https://hkust.zoom.us/j/93415784918 (Passcode: 343324)
Speaker/Performer
Mr. Zhichao HUANG
Organizer
Department of Mathematics
Contact
Payment Details
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English