In this seminar, we will discuss private federated learning. First, we provide new optimization error bounds for differentially private federated learning with Laplacian smoothing (DP-Fed-LS) under heterogeneous data. These bounds clarify how the noise introduced by differential privacy, the heterogeneity of the data, and the variance of stochastic gradients affect the convergence of DP-Fed-LS. Second, we explore the limits of private federated learning by improving existing gradient attacks. Experiments show that our proposed attack can recover training data with high fidelity even when the targeted model is untrained and the batch size is small. Attacks in more realistic settings will also be discussed.
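To make the DP-Fed-LS idea concrete, the following is a minimal sketch of one server aggregation round, assuming the common recipe of per-client L2 clipping, Gaussian noise for differential privacy, and Laplacian smoothing applied as the solve (I - sigma*Delta)^{-1} via FFT. The function names, the noise calibration, and the parameter values are illustrative assumptions, not the speakers' actual algorithm.

```python
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    """Apply Laplacian smoothing (I - sigma*Delta)^{-1} g.

    The 1-D discrete Laplacian with periodic boundary is circulant,
    so the linear solve reduces to a pointwise division in the
    Fourier domain.
    """
    n = g.size
    d = np.zeros(n)
    d[0], d[1], d[-1] = -2.0, 1.0, 1.0        # circulant Laplacian stencil
    denom = 1.0 - sigma * np.fft.fft(d)       # eigenvalues of I - sigma*Delta
    return np.real(np.fft.ifft(np.fft.fft(g) / denom))

def dp_fed_ls_round(client_updates, clip=1.0, noise_std=0.1, sigma=1.0, rng=None):
    """One illustrative server round: clip each client update in L2
    norm, average, add Gaussian noise scaled to the clipping bound,
    then smooth the noisy aggregate."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noisy = avg + rng.normal(0.0, noise_std * clip / len(client_updates),
                             size=avg.shape)
    return laplacian_smooth(noisy, sigma)
```

Note that Laplacian smoothing preserves the mean of the gradient while attenuating its high-frequency components, which is the intuition for why it can partially offset the variance added by the privacy noise.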

April 29
10:15am - 11:15am
Venue
https://hkust.zoom.us/j/99997376210 (Passcode: 214192)
Speaker/Performer
Mr. Zhicong LIANG
Organizer
Department of Mathematics
Contact
Payment Details
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English