Recently, pre-trained language models based on the Transformer architecture, such as BERT and RoBERTa, have achieved remarkable results on various natural language processing tasks and even some computer vision tasks. However, these models contain a large number of parameters, which hinders their deployment on edge devices with limited storage. In this talk, I will first introduce the basics of pre-trained language models and our proposed pre-trained language model NEZHA. Then I will elaborate on how we address these deployment concerns in various scenarios, covering both inference and training. Specifically, compression and acceleration methods using knowledge distillation, dynamic networks, and network quantization will be discussed. Finally, I will discuss recent progress on training deep networks on edge devices through quantization.
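
As a concrete illustration of one of the compression techniques mentioned above, the following is a minimal PyTorch sketch of a soft-label knowledge distillation loss in the style of Hinton et al. It is a generic example, not the specific method presented in the talk; the function name and the hyperparameters T (temperature) and alpha (mixing weight) are assumptions for illustration.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened student
    # and teacher distributions, scaled by T^2 to keep gradient magnitudes
    # comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

In practice the teacher's logits are computed with gradients disabled, and the weighted sum trades off fidelity to the larger teacher model against fitting the labeled data directly.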

October 28
3:00pm - 4:20pm
Venue
https://hkust.zoom.us/j/98248767613 (Passcode: math6380p)
Speaker/Performer
Dr. Lu HOU
Huawei Noah’s Ark Lab
Organizer
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English