In recent years, the theory and practice of stochastic optimization have centered on stochastic gradient descent (SGD), retaining its basic first-order stochastic structure while seeking to improve it through mechanisms such as averaging, momentum, and variance reduction. Improvement can be measured along several dimensions, however, and it has proved difficult to obtain gains simultaneously in nonasymptotic measures of convergence rate and in asymptotic measures of distributional tightness. In this work, we consider first-order stochastic optimization from a general statistical point of view, which motivates a specific form of recursive averaging of past stochastic gradients. The resulting algorithm, which we refer to as Recursive One-Over-T SGD (ROOT-SGD), matches the state-of-the-art convergence rate among online variance-reduced stochastic approximation methods. Moreover, under slightly stronger distributional assumptions, the rescaled last iterate of ROOT-SGD converges to a zero-mean Gaussian distribution that achieves near-optimal covariance. This is joint work with Wenlong Mou, Martin Wainwright, and Michael Jordan.
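As a rough illustration of the kind of recursive one-over-t gradient averaging the abstract refers to, the Python sketch below combines a fresh stochastic gradient with a (1 - 1/t)-weighted correction built from the previous iterate. The function name root_sgd_sketch, the specific recursion, the step-size choice, and the noisy quadratic example are all illustrative assumptions, not the exact update or analysis presented in the talk.

import numpy as np

def root_sgd_sketch(grad, x0, n_steps, step_size, rng):
    # Minimal sketch of recursive one-over-t stochastic gradient averaging.
    # grad(x, sample) should return a stochastic gradient at x for one sample.
    # The recursion below is one plausible reading of "recursive averaging of
    # past stochastic gradients", not necessarily the paper's exact update.
    x_prev = np.array(x0, dtype=float)    # iterate x_{t-2}
    x_curr = x_prev.copy()                # iterate x_{t-1}
    v = None                              # running recursive gradient estimate
    for t in range(1, n_steps + 1):
        sample = rng.standard_normal(x_curr.shape)    # placeholder data draw
        g_curr = grad(x_curr, sample)                 # gradient at current iterate
        if v is None:
            v = g_curr                                # start from a plain stochastic gradient
        else:
            g_prev = grad(x_prev, sample)             # same sample, previous iterate
            # Weight the old estimate's correction by (1 - 1/t), so past
            # gradient information is averaged down at a one-over-t rate.
            v = g_curr + (1.0 - 1.0 / t) * (v - g_prev)
        x_prev, x_curr = x_curr, x_curr - step_size * v
    return x_curr

# Hypothetical usage: noisy gradients of the quadratic f(x) = 0.5 * ||x||^2.
rng = np.random.default_rng(0)
noisy_grad = lambda x, sample: x + 0.1 * sample
x_final = root_sgd_sketch(noisy_grad, np.ones(5), n_steps=2000, step_size=0.1, rng=rng)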
When
28 August 2020, 11am - 12pm
Where
https://hkust.zoom.us/j/5616960008
Organizer(s)
Department of Mathematics
Contact/Enquiries
mathseminar@ust.hk
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language
English