Recently, Transformer-based pre-trained language models such as BERT and RoBERTa have achieved remarkable results on various natural language processing tasks and even on some computer vision tasks. However, these models contain a large number of parameters, which hinders their deployment on edge devices with limited storage. In this talk, I will first introduce some basics of pre-trained language models and our proposed pre-trained language model NEZHA. Then I will elaborate on how we address these deployment concerns in various scenarios, during both inference and training. Specifically, compression and acceleration methods based on knowledge distillation, dynamic networks, and network quantization will be discussed. Finally, I will also discuss recent progress on training deep networks on edge devices through quantization.
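As a minimal illustration of the knowledge-distillation objective named in the abstract (a generic sketch, not the specific method presented in the talk), the following PyTorch snippet combines a temperature-softened KL term against teacher logits with an ordinary cross-entropy term; the temperature T and mixing weight alpha are assumed hyperparameters for illustration.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soft-target term: KL divergence between the temperature-scaled
        # teacher and student distributions, rescaled by T^2 as is standard.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

In practice, the student is trained with this combined loss while the (larger) teacher is kept frozen; alpha trades off how much the student imitates the teacher versus fitting the labels directly.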

October 28
3pm - 4:20pm
Venue
https://hkust.zoom.us/j/98248767613 (Passcode: math6380p)
Speaker/Performer
Dr. Lu HOU
Huawei Noah’s Ark Lab
Organizer
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English