Different GAN (Generative Adversarial Network) architectures in TensorFlow
Based on the Wasserstein GAN paper: https://arxiv.org/abs/1701.07875
- Generator ops: `tf.layers.conv2d_transpose`, `tf.contrib.layers.batch_norm`
- Discriminator ops: `tf.layers.conv2d`, `tf.contrib.layers.batch_norm`
```python
import tensorflow as tf

def leaky_relu(x, name, leak=0.2):
    # Leaky ReLU: pass positive values through, scale negatives by `leak`.
    return tf.maximum(x, leak * x, name=name)
```
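A minimal sketch of how those ops might compose into generator and discriminator blocks (the `gen_block`/`disc_block` names, kernel sizes, and strides are illustrative assumptions, not the repo's code):

```python
import tensorflow as tf

def gen_block(x, filters, is_training, name):
    # Generator block: upsample with a transposed convolution,
    # then batch-normalize and apply leaky ReLU.
    with tf.variable_scope(name):
        x = tf.layers.conv2d_transpose(x, filters, kernel_size=5,
                                       strides=2, padding='same')
        x = tf.contrib.layers.batch_norm(x, is_training=is_training)
        return leaky_relu(x, name='lrelu')

def disc_block(x, filters, is_training, name):
    # Discriminator block: downsample with a strided convolution,
    # then batch-normalize and apply leaky ReLU.
    with tf.variable_scope(name):
        x = tf.layers.conv2d(x, filters, kernel_size=5,
                             strides=2, padding='same')
        x = tf.contrib.layers.batch_norm(x, is_training=is_training)
        return leaky_relu(x, name='lrelu')
```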
W-GAN
A Wasserstein Generative Adversarial Network implemented with TensorFlow using the MNIST dataset.
Generator
- Input: 100 (random noise vector)
- Output: 784 (a flattened 28x28 image)
- Purpose: will learn to output images that look like real MNIST images from random input.
Discriminator
- Input: 784
- Output: 1
- Purpose: will learn to tell a real image (one that looks like it could come from the MNIST dataset) from a fake one.
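A minimal fully connected sketch of those input/output shapes (TF 1.x; the 128-unit hidden layers and `tanh` output are assumptions, not the repo's actual architecture):

```python
import tensorflow as tf

def generator(z):
    # 100 -> 784: maps random noise to a flattened 28x28 image.
    with tf.variable_scope('generator'):
        h = tf.layers.dense(z, 128, activation=tf.nn.relu)
        # tanh keeps pixels in [-1, 1], matching normalized MNIST images
        return tf.layers.dense(h, 784, activation=tf.nn.tanh)

def discriminator(x, reuse=False):
    # 784 -> 1: scores how "real" an image looks.
    with tf.variable_scope('discriminator', reuse=reuse):
        h = tf.layers.dense(x, 128, activation=tf.nn.relu)
        return tf.layers.dense(h, 1)  # raw score; W-GAN uses no sigmoid

z = tf.placeholder(tf.float32, [None, 100], name='z')  # random input
x = tf.placeholder(tf.float32, [None, 784], name='x')  # real flattened MNIST images

fake = generator(z)                       # [None, 784]
d_real = discriminator(x)                 # [None, 1]
d_fake = discriminator(fake, reuse=True)  # [None, 1], shared weights
```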
A problem with the way I built this is that I used the same architecture for both the generator and the discriminator. Although I thought this would save me, the developer, a lot of time, it actually caused a lot of problems trying to pigeonhole that architecture onto a smaller input (the discriminator starts from a 28x28 image while the generator starts from 10x10). The shared stack, with one possible reading sketched in code after this list:
- conv1 -> relu -> pool ->
- conv2 -> relu -> pool ->
- conv3 -> relu -> pool ->
- fullyConnected1 -> relu ->
- fullyConnected2 -> relu ->
- fullyConnected3 ->
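One possible reading of that stack as the discriminator network for 28x28 inputs (filter counts, kernel sizes, and fully connected widths are assumptions):

```python
import tensorflow as tf

def shared_net(images):
    # images: [None, 28, 28, 1]
    net = images
    for i, filters in enumerate([32, 64, 128]):
        # convN -> relu -> pool
        net = tf.layers.conv2d(net, filters, kernel_size=5, padding='same',
                               activation=tf.nn.relu, name='conv%d' % (i + 1))
        net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
    net = tf.layers.flatten(net)
    # fullyConnected1 -> relu -> fullyConnected2 -> relu -> fullyConnected3
    net = tf.layers.dense(net, 256, activation=tf.nn.relu)
    net = tf.layers.dense(net, 128, activation=tf.nn.relu)
    return tf.layers.dense(net, 1)  # no activation on the final layer
```

Three pooling stages take 28x28 down to 3x3; starting from the generator's 10x10, the same stack bottoms out at 1x1 almost immediately, which is the pigeonholing problem noted above.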
The full pipeline: 100 random numbers -> Generator -> image output (784) -> Discriminator -> (Real|Fake)
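Since this is the W-GAN variant, here is a minimal sketch of the training objective from the linked paper (it assumes the `d_real`/`d_fake` scores and variable scopes from the sketch above; the 0.01 clip value and 5e-5 RMSProp learning rate are the paper's defaults):

```python
# Critic (discriminator) maximizes the gap between real and fake scores;
# written here as a loss to minimize:
d_loss = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real)
# Generator tries to raise the critic's score on fakes:
g_loss = -tf.reduce_mean(d_fake)

d_vars = [v for v in tf.trainable_variables() if v.name.startswith('discriminator')]
g_vars = [v for v in tf.trainable_variables() if v.name.startswith('generator')]

d_train = tf.train.RMSPropOptimizer(5e-5).minimize(d_loss, var_list=d_vars)
g_train = tf.train.RMSPropOptimizer(5e-5).minimize(g_loss, var_list=g_vars)

# Weight clipping keeps the critic approximately Lipschitz (per the paper);
# run after every critic update.
clip_d = [v.assign(tf.clip_by_value(v, -0.01, 0.01)) for v in d_vars]
```

The paper trains the critic several steps (n_critic = 5) for every single generator step.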