Recently, pre-trained language models based on the Transformer architecture, such as BERT and RoBERTa, have achieved remarkable results on various natural language processing tasks and even some computer vision tasks. However, these models have a very large number of parameters, which hinders their deployment on edge devices with limited storage. In this talk, I will first introduce some basics of pre-trained language models and our proposed pre-trained language model NEZHA. Then I will elaborate on how we alleviate these concerns in various deployment scenarios, covering both inference and training. Specifically, compression and acceleration methods using knowledge distillation, dynamic networks, and network quantization will be discussed. Finally, I will also discuss some recent progress on training deep networks on edge devices through quantization.
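As background for the compression techniques named above, the following is a minimal sketch of a knowledge-distillation objective, in which a small "student" model is trained to match the temperature-softened output distribution of a large "teacher" (e.g. a BERT-scale model) in addition to the ground-truth labels. The function name, temperature, and weighting here are illustrative assumptions for a generic PyTorch setup, not details of the speaker's method.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Weighted combination of the two terms.
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: random logits for a batch of 4 examples over 3 classes.
student_logits = torch.randn(4, 3)
teacher_logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
print(distillation_loss(student_logits, teacher_logits, labels))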

When
28 October 2020, 3pm - 4:20pm
Where
https://hkust.zoom.us/j/98248767613 (Passcode: math6380p)
Organizer(s)
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English