Recently, Transformer-based pre-trained language models such as BERT and RoBERTa have achieved remarkable results on various natural language processing tasks and even some computer vision tasks. However, these models contain a large number of parameters, which hinders their deployment on edge devices with limited storage. In this talk, I will first introduce the basics of pre-trained language models and our proposed pre-trained language model NEZHA. Then I will elaborate on how we address deployment concerns in various scenarios, during both inference and training. Specifically, compression and acceleration methods based on knowledge distillation, dynamic networks, and network quantization will be discussed. Finally, I will discuss some recent progress on training deep networks on edge devices through quantization.
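
Of the compression techniques mentioned above, knowledge distillation is perhaps the most widely used: a small student model is trained to match the softened output distribution of a large teacher. Below is a minimal PyTorch sketch of such a distillation loss; it is illustrative only, not the speaker's specific method, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student outputs."""
    # Soften both distributions with the same temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient magnitudes comparable
    # across temperature settings (Hinton et al., 2015).
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2
```

In practice this term is usually combined with the ordinary cross-entropy loss on the ground-truth labels, weighted by a tunable coefficient.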

When
28 Oct 2020
3:00pm - 4:20pm
Where
https://hkust.zoom.us/j/98248767613 (Passcode: math6380p)
Speakers/Performers
Dr. Lu HOU
Huawei Noah’s Ark Lab
Organizer(s)
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language(s)
English