diff --git a/README.md b/README.md
index c8db0cb..a5f0a7d 100644
--- a/README.md
+++ b/README.md
@@ -63,10 +63,10 @@ $ python3.7 acgan.py
```

-
+

### Adversarial Autoencoder
_Adversarial Autoencoder_

#### Authors
@@ -84,10 +84,10 @@ $ python3.7 aae.py
```

-
+

### BEGAN
_BEGAN: Boundary Equilibrium Generative Adversarial Networks_

#### Authors
@@ -105,10 +105,10 @@ $ python3.7 began.py
```

-
+

### BicycleGAN
_Toward Multimodal Image-to-Image Translation_

#### Authors
@@ -123,6 +123,10 @@ Many image-to-image translation problems are ambiguous, as a single input image

+
+<p align="center">
+    <img src="assets/bicyclegan.gif">
+</p>

#### Run Example
```
$ cd data/
@@ -132,7 +136,7 @@ $ python3.7 bicyclegan.py
```

-
+

Various style translations by varying the latent code.
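As a rough illustration of what "varying the latent code" means here, a minimal PyTorch sketch follows. It is not the repository's actual API: `generator`, its `(image, code)` call signature, and `latent_dim=8` are all assumptions made for the example.

```
import torch

# Hypothetical generator with a BicycleGAN-style signature G(x, z):
# x is an input image batch, z a latent style code. Any nn.Module that
# accepts (image, code) works here; this only shows the sampling loop.
def vary_style(generator, x, latent_dim=8, n_styles=5, device="cpu"):
    generator.eval()
    outputs = []
    with torch.no_grad():
        for _ in range(n_styles):
            # Each draw of z should map the same input to a different
            # plausible translation if the learned mapping is multimodal.
            z = torch.randn(x.size(0), latent_dim, device=device)
            outputs.append(generator(x, z))
    return torch.stack(outputs)  # (n_styles, batch, C, H, W)
```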

@@ -185,11 +189,11 @@ $ python3.7 clustergan.py
```

-
+

### Conditional GAN
_Conditional Generative Adversarial Nets_

#### Authors
@@ -207,7 +211,7 @@ $ python3.7 cgan.py
```

-
+<img src="assets/cgan.gif">

### Context-Conditional GAN

@@ -252,7 +256,7 @@ $ python3.7 context_encoder.py
Rows: Masked | Inpainted | Original | Masked | Inpainted | Original

### Coupled GAN
_Coupled Generative Adversarial Networks_

#### Authors
@@ -270,13 +274,13 @@ $ python3.7 cogan.py
```

-
+<img src="assets/cogan.gif">

Generated MNIST and MNIST-M images

### CycleGAN
_Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks_

#### Authors
@@ -327,7 +331,7 @@ $ python3.7 dcgan.py

### DiscoGAN
_Learning to Discover Cross-Domain Relations with Generative Adversarial Networks_

#### Authors
@@ -351,7 +355,7 @@ $ python3.7 discogan.py --dataset_name edges2shoes
```

-
+<img src="assets/discogan.gif">

Rows from top to bottom: (1) Real image from domain A (2) Translated image from
@@ -376,7 +380,7 @@ $ cd models/dragan/
$ python3.7 dragan.py
```

### DualGAN
_DualGAN: Unsupervised Dual Learning for Image-to-Image Translation_

#### Authors
@@ -396,6 +400,10 @@ $ cd ../models/dualgan/
$ python3.7 dualgan.py --dataset_name facades
```
+
+<p align="center">
+    <img src="assets/dualgan.gif">
+</p>

### Energy-Based GAN
_Energy-based Generative Adversarial Network_

@@ -413,7 +421,7 @@ $ cd models/ebgan/
$ python3.7 ebgan.py
```

### Enhanced Super-Resolution GAN
_ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks_

#### Authors
@@ -432,6 +440,10 @@ $ cd models/esrgan/
$ python3.7 esrgan.py
```
+
+<p align="center">
+    <img src="assets/esrgan.gif">
+</p>

@@ -478,14 +490,14 @@ $ python3.7 infogan.py
```

-
+

Result of varying categorical latent variable by column.

-
+

Result of varying continuous latent variable by row.
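A minimal sketch of how such grids are typically produced with InfoGAN-style latent codes: the generator input concatenates plain noise, a one-hot categorical code, and a continuous code, and the grids above come from varying one code while holding the others fixed. The dimensions below are illustrative assumptions, not necessarily the settings used by `infogan.py`.

```
import torch
import torch.nn.functional as F

# Assemble an InfoGAN-style generator input: noise z, a one-hot
# categorical code, and a continuous code, concatenated into one vector.
def infogan_input(batch_size, noise_dim=62, n_classes=10, cont_dim=2):
    z = torch.randn(batch_size, noise_dim)
    labels = torch.randint(0, n_classes, (batch_size,))
    categorical = F.one_hot(labels, n_classes).float()
    continuous = torch.rand(batch_size, cont_dim) * 2 - 1  # uniform in [-1, 1]
    return torch.cat([z, categorical, continuous], dim=1)
```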

@@ -529,7 +541,7 @@ $ python3.7 munit.py --dataset_name edges2shoes
```

-
+

Results by varying the style code.

@@ -559,14 +571,14 @@ $ python3.7 pix2pix.py --dataset_name facades
```

-
+

Rows from top to bottom: (1) The condition for the generator (2) Generated image
based on the condition (3) The true corresponding image to the condition

### PixelDA
_Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks_

#### Authors
@@ -590,7 +602,7 @@ $ python3.7 pixelda.py
| PixelDA | 95% |

-
+<img src="assets/pixelda.gif">

Rows from top to bottom: (1) Real images from MNIST (2) Translated images from
@@ -618,12 +630,12 @@ $ python3.7 relativistic_gan.py # Relativistic Standard GAN
$ python3.7 relativistic_gan.py --rel_avg_gan # Relativistic Average GAN
```

### Semi-Supervised GAN
_Semi-Supervised Generative Adversarial Network_

#### Authors
Augustus Odena

#### Abstract
We extend Generative Adversarial Networks (GANs) to the semi-supervised context by forcing the discriminator network to output class labels. We train a generative model G and a discriminator D on a dataset with inputs belonging to one of N classes. At training time, D is made to predict which of N+1 classes the input belongs to, where an extra class is added to correspond to the outputs of G. We show that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.

@@ -635,6 +647,10 @@ $ cd models/sgan/
$ python3.7 sgan.py
```
+
+<p align="center">
+    <img src="assets/sgan.gif">
+</p>
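To make the N+1-class setup in the abstract concrete, here is a minimal sketch. It is not the code in `models/sgan/sgan.py`: the small MLP discriminator, the MNIST-sized input, and the loss wiring are placeholder choices for illustration only.

```
import torch
import torch.nn as nn

# The discriminator is a classifier over N + 1 classes, where class
# index N is reserved for generator samples, per the abstract.
N_CLASSES = 10

discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, N_CLASSES + 1),  # extra logit for generated samples
)

ce = nn.CrossEntropyLoss()

def d_loss(real_x, real_y, fake_x):
    # Real samples are trained toward their true label in [0, N);
    # generated samples are trained toward the extra class index N.
    fake_y = torch.full((fake_x.size(0),), N_CLASSES, dtype=torch.long)
    return ce(discriminator(real_x), real_y) + ce(discriminator(fake_x), fake_y)
```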

### Softmax GAN
_Softmax GAN_

@@ -686,7 +702,7 @@ Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham,
#### Abstract
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.

[[Paper]](https://arxiv.org/abs/1609.04802) [[Code]](models/srgan/srgan.py)
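As a rough illustration of the perceptual loss the abstract describes (a content loss computed in VGG feature space plus a weighted adversarial term), a sketch assuming `torchvision` is available; the VGG layer cut and the 1e-3 weight are illustrative assumptions, not necessarily the repository's exact values.

```
import torch
import torch.nn as nn
from torchvision.models import vgg19

# Frozen VGG19 feature extractor used to compare images in feature space.
vgg = vgg19(weights="DEFAULT").features[:36].eval()
for p in vgg.parameters():
    p.requires_grad = False

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(sr, hr, d_logits_on_sr):
    # Content loss: match feature activations of the super-resolved
    # image to those of the ground-truth high-resolution image.
    content = mse(vgg(sr), vgg(hr))
    # Adversarial loss: push D to label super-resolved images as real.
    adversarial = bce(d_logits_on_sr, torch.ones_like(d_logits_on_sr))
    return content + 1e-3 * adversarial
```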

@@ -763,7 +779,7 @@ $ python3.7 wgan_gp.py

### Wasserstein GAN DIV
_Wasserstein Divergence for GANs_

#### Authors
@@ -782,5 +798,5 @@ $ python3.7 wgan_div.py
```

-
+<img src="assets/wgan_div.gif">

\ No newline at end of file
diff --git a/assets/bicyclegan.gif b/assets/bicyclegan.gif
new file mode 100644
index 0000000..11f39e6
Binary files /dev/null and b/assets/bicyclegan.gif differ
diff --git a/assets/cgan.gif b/assets/cgan.gif
new file mode 100644
index 0000000..312172a
Binary files /dev/null and b/assets/cgan.gif differ
diff --git a/assets/cogan.gif b/assets/cogan.gif
new file mode 100644
index 0000000..d0eccbb
Binary files /dev/null and b/assets/cogan.gif differ
diff --git a/assets/discogan.gif b/assets/discogan.gif
new file mode 100644
index 0000000..a742662
Binary files /dev/null and b/assets/discogan.gif differ
diff --git a/assets/dualgan.gif b/assets/dualgan.gif
new file mode 100644
index 0000000..ca086bb
Binary files /dev/null and b/assets/dualgan.gif differ
diff --git a/assets/esrgan.gif b/assets/esrgan.gif
new file mode 100644
index 0000000..879c582
Binary files /dev/null and b/assets/esrgan.gif differ
diff --git a/assets/pixelda.gif b/assets/pixelda.gif
new file mode 100644
index 0000000..8cac8b5
Binary files /dev/null and b/assets/pixelda.gif differ
diff --git a/assets/sgan.gif b/assets/sgan.gif
new file mode 100644
index 0000000..e42a4f5
Binary files /dev/null and b/assets/sgan.gif differ
diff --git a/assets/wgan_div.gif b/assets/wgan_div.gif
new file mode 100644
index 0000000..130932a
Binary files /dev/null and b/assets/wgan_div.gif differ