Our understanding of GAN (generative adversarial net) training remains limited, since it is a non-convex-non-concave min-max optimization problem. As a result, most recent studies have focused on local analysis. In this talk, we discuss how to perform a global analysis of GANs. We prove that the original JS-GAN has exponentially many bad strict local minima, which are perceived as mode collapse. We show that a two-line modification of JS-GAN, called the relativistic standard GAN (RS-GAN), eliminates all bad basins. We extend both results to a large class of losses: for separable GANs (including JS-GAN, hinge-GAN, and LS-GAN), exponentially many bad basins exist, while for R-GANs no bad basins exist. The effectiveness of R-GANs has been verified by earlier empirical work (e.g., ESRGAN in super-resolution). Based on the theory, we predict that R-GANs have a bigger advantage for narrower neural nets, and our experiments verify that R-GANs (e.g., RS-GAN) can beat their separable counterparts (e.g., JS-GAN) by 5-10 FID points on narrower nets. Our theory also implies that the advantage is larger for higher-dimensional images; we show that on high-resolution datasets such as LSUN, while JS-GAN generates only noise, RS-GAN can generate quite good images.
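To make the "two-line modification" concrete, below is a minimal sketch in PyTorch contrasting the separable JS-GAN losses with their relativistic RS-GAN counterparts. The tensor names `c_real` and `c_fake` (pre-sigmoid discriminator logits) and the function names are illustrative assumptions, not code from the talk.

```python
import torch
import torch.nn.functional as F

def js_gan_losses(c_real, c_fake):
    """Standard (JS) GAN: separable losses, each term sees only real
    or only fake logits."""
    d_loss = (
        F.binary_cross_entropy_with_logits(c_real, torch.ones_like(c_real))
        + F.binary_cross_entropy_with_logits(c_fake, torch.zeros_like(c_fake))
    )
    # Non-saturating generator loss.
    g_loss = F.binary_cross_entropy_with_logits(c_fake, torch.ones_like(c_fake))
    return d_loss, g_loss

def rs_gan_losses(c_real, c_fake):
    """Relativistic standard GAN: the modification replaces the separable
    terms with the pairwise difference of real and fake logits."""
    d_loss = F.binary_cross_entropy_with_logits(
        c_real - c_fake, torch.ones_like(c_real)
    )
    g_loss = F.binary_cross_entropy_with_logits(
        c_fake - c_real, torch.ones_like(c_fake)
    )
    return d_loss, g_loss

if __name__ == "__main__":
    # Placeholder logits for a batch of 8 real and 8 generated images.
    c_real = torch.randn(8, 1)
    c_fake = torch.randn(8, 1)
    print(js_gan_losses(c_real, c_fake))
    print(rs_gan_losses(c_real, c_fake))
```

The only change is in how the logits enter the cross-entropy: RS-GAN scores each real sample relative to a fake one, rather than scoring each in isolation.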
When
21 August 2020, 10:30am - 11:30am
Where
https://hkust.zoom.us/j/5616960008
Organizer(s)
Department of Mathematics
Contact/Enquiries
mathseminar@ust.hk
Audience
Alumni, Faculty and Staff, PG Students, UG Students
Language
English