Current neural networks are vulnerable to small, carefully crafted perturbations known as adversarial examples. Although adversarial training and its variants are currently the most effective way to achieve robustness against adversarial attacks, their poor generalization limits their performance on test samples. In this seminar, I will present a method that improves the generalization and robust accuracy of adversarially trained networks via self-supervised test-time fine-tuning. To this end, I introduce a meta adversarial training method that incorporates the test-time fine-tuning procedure into the training phase, strengthening the correlation between the self-supervised and classification tasks and thereby providing a good starting point for test-time fine-tuning. Extensive experiments on CIFAR10 and STL10 with different self-supervised tasks show that the method consistently improves robust accuracy under a variety of attack strategies, for both white-box and black-box attacks.

April 29
9:30am - 10:30am
Venue
https://hkust.zoom.us/j/93415784918 (Passcode: 343324)
Speaker/Performer
Mr. Zhichao HUANG
Organizer
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English