Policy gradient (PG) methods and their variants lie at the heart of modern reinforcement learning. Due to the intrinsic non-concavity of value maximization, however, the theoretical underpinnings of PG-type methods have remained limited until recently. In this talk, we discuss both the ineffectiveness and effectiveness of nonconvex policy optimization. On the one hand, we demonstrate that the popular softmax policy gradient method can take exponential time to converge. On the other hand, we show that employing natural policy gradients and enforcing entropy regularization allows for fast global convergence.
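As an illustrative aside (not part of the talk materials), the softmax policy gradient method mentioned above can be sketched on a toy multi-armed bandit. Everything below is an assumption for illustration: the three-arm reward vector, the step size, and the iteration count are invented, and the update uses the exact gradient of the expected reward under the softmax parameterization rather than a sampled estimate.

```python
import numpy as np

# Toy sketch (illustrative assumptions throughout): exact softmax
# policy gradient ascent on a 3-armed bandit with known rewards.
r = np.array([1.0, 0.8, 0.2])  # assumed expected reward per arm
theta = np.zeros(3)            # softmax logits, initialized uniform
eta = 0.5                      # assumed step size

def softmax(z):
    e = np.exp(z - z.max())    # shift for numerical stability
    return e / e.sum()

for _ in range(5000):
    pi = softmax(theta)
    J = pi @ r                 # expected reward under the current policy
    # Exact policy gradient for the softmax parameterization:
    # dJ/dtheta_k = pi_k * (r_k - J)
    theta += eta * pi * (r - J)

pi = softmax(theta)
print(pi.argmax())             # the policy concentrates on the best arm
```

The slow concentration of mass on the best arm when two rewards are close (here 1.0 vs 0.8) hints at the exponential-time lower bound discussed in the talk, though this sketch makes no claim about that result.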

October 17
11:00am - 12:00pm
Venue
https://hkust.zoom.us/j/94883840530 (Passcode: hkust)
Speaker/Performer
Prof. Yuting WEI
The Wharton School, University of Pennsylvania
Organizer
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English