Generative Adversarial Networks Based on a General Parameterized Family of Generator Loss Functions

Abstract

This thesis introduces a unifying parameterized generator loss function for generative adversarial networks (GANs). We establish an equilibrium theorem for the resulting GAN system under a canonical discriminator in terms of the so-called Jensen-$f$-divergence, a natural generalization of the Jensen-Shannon divergence to general $f$-divergences. We also show that our result recovers several GANs from the literature as special cases, including the original GAN, the least squares GAN (LSGAN), the $\alpha$-GAN, and others. Finally, we conduct systematic experiments on three image datasets for different manifestations of our GAN system to illustrate their performance and stability.
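The general family itself is not specified in this abstract. As a purely illustrative sketch, one published member of such a family is the $\alpha$-loss used by $\alpha$-GAN, which recovers the original GAN's log-loss as $\alpha \to 1$; the function names below (`alpha_loss`, `generator_loss`) are hypothetical, and the thesis's family may be parameterized differently:

```python
import numpy as np

def alpha_loss(p, alpha):
    """alpha-loss of a predicted probability p in (0, 1].

    For alpha = 1 this reduces to the log-loss -log(p); other values of
    alpha give different members of the parameterized family.
    """
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

def generator_loss(d_fake, alpha):
    """Generator objective: mean alpha-loss of the discriminator's
    outputs D(G(z)) on generated samples (to be minimized)."""
    return float(np.mean(alpha_loss(d_fake, alpha)))
```

At $\alpha = 1$ this is the non-saturating generator loss $-\log D(G(z))$ of the original GAN; varying $\alpha$ traces out other losses in the family, which is the kind of single-parameter unification the abstract describes.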

Keywords

Generative adversarial networks, Deep learning, Parameterized loss functions, f-divergence, Jensen-f-divergence
