We consider the general $f$-divergence formulation of bidirectional generative modeling, which includes VAE and BiGAN as special cases. We present a new optimization method for this formulation, in which the gradient is computed using an adversarially learned discriminator. Within our framework, we show that different divergences induce similar algorithms in terms of gradient evaluation, differing only in scaling. This paper therefore gives a general recipe for a class of principled $f$-divergence-based generative modeling methods. Theoretical justifications and extensive empirical studies demonstrate the advantage of our approach over existing methods.
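The adversarial estimation the abstract alludes to rests on the variational representation $D_f(P \| Q) = \sup_T \, \mathbb{E}_P[T(x)] - \mathbb{E}_Q[f^*(T(x))]$, where $f^*$ is the convex conjugate of $f$ and $T$ plays the role of the discriminator. A minimal sketch (illustrative only, not code from the talk; the Gaussian choices and the plugged-in optimal discriminator $T^*$ are assumptions) for the KL case $f(u) = u \log u$, $f^*(t) = e^{t-1}$:

```python
# Monte Carlo check of the variational f-divergence bound for KL divergence:
#     KL(P || Q) = sup_T  E_P[T(x)] - E_Q[exp(T(x) - 1)].
# Instead of learning T adversarially, we plug in the known optimizer
# T*(x) = 1 + log p(x)/q(x) for two Gaussians, so the estimate can be
# compared against the closed-form KL value.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# P = N(0, 1), Q = N(1, 1); closed form: KL(P || Q) = (0 - 1)^2 / 2 = 0.5
x_p = rng.normal(0.0, 1.0, n)   # samples from P
x_q = rng.normal(1.0, 1.0, n)   # samples from Q

def t_star(x):
    # Optimal discriminator for KL: T*(x) = 1 + log p(x)/q(x).
    # For these Gaussians, log p(x)/q(x) = 0.5 - x.
    return 1.5 - x

# Variational lower bound E_P[T] - E_Q[exp(T - 1)], tight at T = T*
kl_estimate = t_star(x_p).mean() - np.exp(t_star(x_q) - 1.0).mean()
print(f"KL estimate: {kl_estimate:.3f}  (closed form: 0.5)")
```

In the adversarial setting, `t_star` would be replaced by a learned network whose training objective is exactly the bound above; choosing a different $f$ changes only $f^*$ and hence the scaling of the gradient signal.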
15 May
10:30am - 11:30am
Venue
https://hkust.zoom.com.cn/j/96396217133
Speaker/Performer
Ms. Xinwei SHEN
HKUST
Organizer
Department of Mathematics
Contact
mathseminar@ust.hk
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language
English