Our understanding of GAN (generative adversarial net) training is still very limited, since it is a non-convex-non-concave min-max optimization. As a result, most recent studies have focused on local analysis. In this talk, we discuss how to perform a global analysis of GANs. We prove that the original JS-GAN has exponentially many bad strict local minima, which are perceived as mode collapse. We show that a 2-line modification to JS-GAN, called relativistic standard GAN (RS-GAN), eliminates all bad basins. We extend both results to a large class of losses: for separable GANs (including JS-GAN, hinge-GAN, and LS-GAN) exponentially many bad basins exist, while for R-GANs no bad basins exist. The effectiveness of R-GANs has been verified by several earlier empirical works (e.g., ESRGAN in super-resolution). Based on this theory, we predict that R-GANs have a bigger advantage for narrower neural nets, and our experiments verify that R-GANs (e.g., RS-GAN) can beat their separable counterparts (e.g., JS-GAN) by 5-10 FID points in narrower nets. Our theory also implies that the advantage is larger for higher-dimensional images; we show that for high-resolution images such as LSUN, JS-GAN generates only noise, while RS-GAN generates quite good images.
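To make the separable-vs-relativistic distinction concrete, the sketch below contrasts the standard JS-GAN discriminator loss, which scores each logit independently, with the RS-GAN loss, which scores the difference between paired real and fake logits. This is an illustrative NumPy sketch of the published loss formulas, not code from the talk; the function names are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def js_gan_d_loss(d_real, d_fake):
    # Separable (standard JS-GAN) discriminator loss:
    # each logit is pushed toward its own label independently.
    return (-np.mean(np.log(sigmoid(d_real)))
            - np.mean(np.log(1.0 - sigmoid(d_fake))))

def rs_gan_d_loss(d_real, d_fake):
    # Relativistic standard GAN (RS-GAN) discriminator loss:
    # the "2-line modification" pairs the logits, so the
    # discriminator scores how much MORE realistic a real
    # sample looks than a fake one.
    return -np.mean(np.log(sigmoid(d_real - d_fake)))
```

Note that the RS-GAN loss depends only on the gap `d_real - d_fake`, so raising both logits together does not reduce the loss; this coupling is what the talk's global analysis exploits.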
August 2020
10:30am - 11:30am
Department of Mathematics
Alumni, Faculty and Staff, PG Students, UG Students