Recently, pre-trained language models based on the Transformer architecture, such as BERT and RoBERTa, have achieved remarkable results on various natural language processing tasks and even some computer vision tasks. However, these models contain a very large number of parameters, which hinders their deployment on edge devices with limited storage. In this talk, I will first introduce some basics of pre-trained language models and our proposed pre-trained language model NEZHA. I will then elaborate on how we address deployment concerns in various scenarios, during both inference and training. Specifically, compression and acceleration methods based on knowledge distillation, dynamic networks, and network quantization will be discussed. Finally, I will also discuss some recent progress on training deep networks on edge devices through quantization.
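As background for the knowledge distillation methods mentioned above, a minimal sketch of the classic soft-target distillation loss (temperature-scaled KL divergence between teacher and student predictions, following Hinton et al.) might look like the following; the function names and the choice of temperature are illustrative, not taken from the talk:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields
    # softer ("darker-knowledge") probability distributions.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence from teacher soft targets to student predictions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return (temperature ** 2) * kl.mean()
```

In practice this term is usually combined with the ordinary cross-entropy loss on the ground-truth labels, weighted by a mixing coefficient.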

28 October 2020
3pm - 4:20pm
Where
https://hkust.zoom.us/j/98248767613 (Passcode: math6380p)
Organizer(s)
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language(s)
English