Recently, pre-trained language models based on the Transformer architecture, such as BERT and RoBERTa, have achieved remarkable results on various natural language processing tasks and even some computer vision tasks. However, these models contain a large number of parameters, which hinders their deployment on edge devices with limited storage. In this talk, I will first introduce the basics of pre-trained language models and our proposed model NEZHA. I will then elaborate on how we address the challenges that arise in various deployment scenarios, covering both the inference and the training stage. Specifically, I will discuss compression and acceleration methods based on knowledge distillation, dynamic networks, and network quantization. Finally, I will present some recent progress on training deep networks on edge devices through quantization.
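To give a flavor of the quantization theme, the sketch below shows the simplest form of the idea: symmetric uniform quantization of a weight tensor to 8-bit integers. This is an illustrative toy example only, not the specific method presented in the talk; the function names and the NumPy-based setup are my own assumptions.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric uniform quantization of a float weight tensor to integers.

    A minimal illustration of network quantization: store low-bit integers
    plus one float scale, trading a small accuracy loss for ~4x less storage
    (float32 -> int8). Real schemes (per-channel scales, quantization-aware
    training) are considerably more involved.
    """
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax          # map the largest |weight| to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return q.astype(np.float32) * scale

# Tiny usage example: quantize four weights and measure the round-trip error.
w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, scale = quantize_uniform(w)
w_hat = dequantize(q, scale)
```

After the round trip, each weight is off by at most about half a quantization step (`scale / 2`), which is the usual rounding-error bound for uniform quantization.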

October 2020
3pm - 4:20pm
Where (Passcode: math6380p)
Department of Mathematics