Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users. However, an adversary may still be able to infer the private training data by attacking the released model. Differential privacy (DP) provides a statistical guarantee against such attacks, at the cost of possibly degrading the accuracy or utility of the trained models. In this paper, we apply a utility-enhancement scheme based on Laplacian smoothing for differentially private federated learning (DP-Fed-LS), where the statistical precision of parameter aggregation with injected Gaussian noise is improved. We provide tight closed-form privacy bounds for both uniform and Poisson subsampling and derive the corresponding DP guarantees for differentially private federated learning, with or without Laplacian smoothing. Experiments on the MNIST, SVHN, and Shakespeare datasets show that the proposed method improves model accuracy with a DP guarantee under both subsampling mechanisms.
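The aggregation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes clipped client updates are averaged, Gaussian noise is injected for the DP guarantee, and the noisy average is then denoised by Laplacian smoothing, i.e. multiplying by (I - sigma * Delta)^{-1} with Delta a 1-D periodic discrete Laplacian, solved via FFT as in Laplacian-smoothing SGD. All function names, the clipping rule, and the noise calibration here are illustrative assumptions.

```python
import numpy as np

def laplacian_smooth(v, sigma=1.0):
    """Apply (I - sigma * Delta)^{-1} to v, where Delta is the 1-D
    periodic discrete Laplacian. The operator is circulant, so the
    solve reduces to a division in the Fourier domain."""
    d = len(v)
    c = np.zeros(d)
    c[0] = 1.0 + 2.0 * sigma   # diagonal of I - sigma * Delta
    c[1] = -sigma              # off-diagonals (periodic wrap-around)
    c[-1] = -sigma
    return np.real(np.fft.ifft(np.fft.fft(v) / np.fft.fft(c)))

def dp_fed_ls_aggregate(updates, clip_norm, noise_mult, sigma=1.0, rng=None):
    """Hypothetical DP-Fed-LS server step: clip each client update to
    clip_norm, average, add calibrated Gaussian noise, then smooth."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [u * min(1.0, clip_norm / np.linalg.norm(u)) for u in updates]
    avg = np.mean(clipped, axis=0)
    # Noise std scales with sensitivity clip_norm / (number of clients).
    noise = rng.normal(0.0, noise_mult * clip_norm / len(updates),
                       size=avg.shape)
    return laplacian_smooth(avg + noise, sigma)
```

Because the Laplacian annihilates constant vectors, smoothing leaves the mean of the update unchanged while damping high-frequency components, which is where isotropic Gaussian noise concentrates relative to typical gradient signals.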
14 May 2020
11:00am - 12:00pm
Where
https://hkust.zoom.us/j/91364836963
Speakers/Performers
Mr. Zhicong LIANG
HKUST
Organizer(s)
Department of Mathematics
Contact/Enquiries
mathseminar@ust.hk
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language(s)
English