October 28
Seminars, Lectures, Talks
MATH - Seminar on Data Science - Compression and Acceleration of Pre-trained Language Models
Recently, pre-trained language models based on the Transformer architecture, such as BERT and RoBERTa, have achieved remarkable results on a wide range of natural language processing tasks, and even on some computer vision tasks.