GAN training issue!

When I'm training my GAN on the MNIST dataset, I get the following result; it seems like the generator doesn't work at all.

Epoch [0/200], Step[10/600], d_loss: 0.0002, g_loss: 13.0330, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[20/600], d_loss: 0.0045, g_loss: 8.2261, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[30/600], d_loss: 0.0036, g_loss: 27.5934, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[40/600], d_loss: 0.2239, g_loss: 24.8275, D(x): 0.93, D(G(z)): 0.00
Epoch [0/200], Step[50/600], d_loss: 0.0012, g_loss: 19.9418, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[60/600], d_loss: 0.0073, g_loss: 9.9254, D(x): 1.00, D(G(z)): 0.01
Epoch [0/200], Step[70/600], d_loss: 0.0033, g_loss: 8.8135, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[80/600], d_loss: 0.0021, g_loss: 9.3930, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[90/600], d_loss: 1.7211, g_loss: 13.6998, D(x): 1.00, D(G(z)): 0.72
Epoch [0/200], Step[100/600], d_loss: 0.2832, g_loss: 23.5422, D(x): 0.93, D(G(z)): 0.00
Epoch [0/200], Step[110/600], d_loss: 0.0000, g_loss: 26.4731, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[120/600], d_loss: 0.3778, g_loss: 27.6310, D(x): 0.90, D(G(z)): 0.00
Epoch [0/200], Step[130/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[140/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[150/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
Epoch [0/200], Step[160/600], d_loss: 0.0000, g_loss: 27.6310, D(x): 1.00, D(G(z)): 0.00
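
For context, the d_loss / g_loss / D(x) / D(G(z)) numbers above come from the usual BCE GAN objective. A minimal sketch of the kind of training step that logs them (not my exact loop; D, G, criterion = nn.BCELoss(), latent_dim, the two optimizers, and a real batch called images are assumed to be set up elsewhere in the script):

# one training step of a standard non-saturating GAN with BCE loss
batch_size = images.size(0)
real_labels = torch.ones(batch_size, 1)
fake_labels = torch.zeros(batch_size, 1)

# discriminator step
d_real = D(images)                        # mean of this is the logged D(x)
z = torch.randn(batch_size, latent_dim)   # latent_dim matches the generator's in_channels
d_fake = D(G(z).detach())                 # mean of this is the logged D(G(z))
d_loss = criterion(d_real, real_labels) + criterion(d_fake, fake_labels)
d_optimizer.zero_grad()
d_loss.backward()
d_optimizer.step()

# generator step: push the discriminator towards scoring fakes as real
z = torch.randn(batch_size, latent_dim)
g_loss = criterion(D(G(z)), real_labels)
g_optimizer.zero_grad()
g_loss.backward()
g_optimizer.step()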

My Discriminator and Generator are defined as:

class Discriminator(nn.Module):
    def __init__(self, in_channels, df_dim, im_size):
        super(Discriminator, self).__init__()
        # Conv2d_Relu / Conv2d_BatchNorm are custom conv + activation / conv + BatchNorm blocks
        self.conv1 = Conv2d_Relu(in_channels, df_dim, 3, 1)
        self.conv2 = Conv2d_BatchNorm(df_dim, df_dim * 2, 3, 1)
        self.conv3 = Conv2d_BatchNorm(df_dim * 2, df_dim * 4, 3, 1)
        self.conv4 = Conv2d_BatchNorm(df_dim * 4, df_dim * 8, 3, 1)
        self.fc1 = nn.Linear(im_size * im_size * df_dim * 8, 1)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        out = self.conv4(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        return torch.sigmoid(out)

class Generator(nn.Module):
    def __init__(self, in_channels, f_dim=256):
        super(Generator, self).__init__()
        self.f_dim = f_dim
        self.fc = nn.Linear(in_channels, f_dim * 3 * 3)
        self.bn1 = nn.BatchNorm2d(f_dim, momentum=0.01)
        self.deconv1 = nn.ConvTranspose2d(f_dim, f_dim // 2, 3, 2)
        self.bn2 = nn.BatchNorm2d(f_dim // 2, momentum=0.01)
        self.deconv2 = nn.ConvTranspose2d(f_dim // 2, f_dim // 4, 3, 2)
        self.bn3 = nn.BatchNorm2d(f_dim // 4, momentum=0.01)
        self.deconv3 = nn.ConvTranspose2d(f_dim // 4, 1, 2, 2, 1)

    def forward(self, x):
        out = F.relu(self.fc(x))
        out = out.view(out.size(0), self.f_dim, 3, 3)
        out = F.relu(self.bn1(out))
        out = F.relu(self.bn2(self.deconv1(out)))
        out = F.relu(self.bn3(self.deconv2(out)))
        out = self.deconv3(out)
        return torch.tanh(out)
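
For completeness, a quick shape check of the generator (using a latent size of 64 purely as an example): the feature maps go 3x3 -> 7x7 -> 15x15 -> 28x28, which matches MNIST.

G = Generator(in_channels=64)   # latent size chosen for illustration
z = torch.randn(16, 64)
print(G(z).shape)               # torch.Size([16, 1, 28, 28])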

It looks like your model is wired up correctly, but the learning rate is too big.

Try the hyperparameters from the DCGAN paper: https://arxiv.org/pdf/1511.06434.pdf
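
For example, the paper trains with Adam at a learning rate of 0.0002 and beta1 = 0.5. A sketch of what that looks like (D and G being your discriminator and generator instances):

d_optimizer = torch.optim.Adam(D.parameters(), lr=0.0002, betas=(0.5, 0.999))
g_optimizer = torch.optim.Adam(G.parameters(), lr=0.0002, betas=(0.5, 0.999))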

I have fixed my problem by removing the last layer of my discriminator :smiley:
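
For anyone who hits the same thing, one way the change can look, assuming it is the last conv block that goes and fc1 is resized to match (a rough sketch, not necessarily my exact code):

class Discriminator(nn.Module):
    def __init__(self, in_channels, df_dim, im_size):
        super(Discriminator, self).__init__()
        self.conv1 = Conv2d_Relu(in_channels, df_dim, 3, 1)
        self.conv2 = Conv2d_BatchNorm(df_dim, df_dim * 2, 3, 1)
        self.conv3 = Conv2d_BatchNorm(df_dim * 2, df_dim * 4, 3, 1)
        # conv4 dropped; fc1 now takes the flattened conv3 output
        self.fc1 = nn.Linear(im_size * im_size * df_dim * 4, 1)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        out = out.view(out.size(0), -1)
        return torch.sigmoid(self.fc1(out))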