Loss doesn’t decrease and the output is zero

I’m trying to implement an autoencoder in PyTorch, but all my outputs are zero and I don’t know why. 🙂

Here is my code for the autoencoder:


import torch
import torch.nn as nn


class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        # Encoder: 686 -> 256 -> 64
        self.encoder = nn.Sequential(
            nn.Linear(686, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU())
        # Decoder: 64 -> 256 -> 686
        self.decoder = nn.Sequential(
            nn.Linear(64, 256),
            nn.ReLU(),
            nn.Linear(256, 686),
            nn.ReLU())

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

and here is the training process:

from torch.autograd import Variable  # kept for compatibility; in recent PyTorch versions tensors can be used directly

iterations = 10
learning_rate = 0.98
criterion = nn.MSELoss()

# net is assumed to be an instance of autoencoder (in double precision, and on
# the GPU if use_gpu is True); train_dl is the training DataLoader.
optimizer = torch.optim.Adam(
    net.parameters(), lr=learning_rate, weight_decay=1e-5)


for epoch in range(iterations):
    runningLoss = 0.0
    for i, data in enumerate(train_dl, 0):
        inputs, labels = data
        # Flatten each sample to a 686-dimensional vector
        if use_gpu:
            inputs = Variable(inputs.view(-1, 686).double()).cuda()
        else:
            inputs = Variable(inputs.view(-1, 686).double())
        outputs = net(inputs)
        # Reconstruction loss: the target is the input itself
        loss = criterion(outputs, inputs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        runningLoss += loss.item()

    print(f'at iteration: {epoch+1}/{iterations}; BC Error: {runningLoss}')
print('Finished Training')

Could you try to remove the last ReLU and run the code again?
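
For reference, a minimal sketch of what the decoder would look like with the last ReLU removed (the final ReLU clamps the reconstruction to non-negative values and can get stuck at zero, so dropping it lets the last linear layer produce the output directly):

self.decoder = nn.Sequential(
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Linear(256, 686))  # no activation after the final layer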

OK, I tried it. This time the outputs aren’t zero anymore, but it doesn’t seem to converge and the error is very high.

That’s a good starting point to play around with some hyperparameters, e.g. lowering the learning rate.
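
For example, something along these lines (1e-3 is just the common Adam default, not a tuned value; 0.98 is far too high for most models):

learning_rate = 1e-3  # much smaller than 0.98
optimizer = torch.optim.Adam(
    net.parameters(), lr=learning_rate, weight_decay=1e-5)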

Thank you very much, I will update here if I have more questions. ✌️