Control the training batch

Hello,

I’m working with an autoencoder and I want to visualize the reconstructed output next to the original image. I already did it, but my problem is that every time the images are plotted, they are different images, so I cannot make a visual comparison of the improvement.

My code is as follows:

    # training (inside the epoch loop)
    loss = 0.0  # running sum of the batch losses for this epoch
    for batch_features, _ in train_loader:
        batch_features = batch_features.view(-1, 784)  # flatten the 28x28 images

        optimizer.zero_grad()
        outputs = model(batch_features)
        train_loss = criterion(outputs, batch_features)  # reconstruction loss
        train_loss.backward()

        optimizer.step()  # update the weights (net.parameters)
        loss += train_loss.item()

    loss = loss / len(train_loader)  # average over all batches
    print("epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs, loss))

    # VISUALIZATION (assumes: import matplotlib.pyplot as plt
    # and import torchvision.transforms as T)
    reconstructed = outputs.detach().view(-1, 1, 28, 28)  # detach before converting
    original = batch_features.view(-1, 1, 28, 28)
    img = T.ToPILImage()(reconstructed[0])  # plot the first element of the last batch
    img2 = T.ToPILImage()(original[0])

    plt.subplot(121)
    imgplot = plt.imshow(img)
    plt.subplot(122)
    imgplot2 = plt.imshow(img2)
    plt.suptitle("Reconstructed vs Original")
    plt.show()

Every time I plot the images, the [0] image is different. Why? I think I don’t exactly understand the concept of a batch, because as far as I understand, the [0] image I plot should always be the [0] image of the last batch, but it is not.
How could I control which batch is fed into the NN? How could I plot the same picture every time? For instance, image [23] of batch 30, or image [540] of the whole training set?

Thank you so much!

I guess you might be shuffling the data in your training DataLoader, so each epoch would yield the samples in a different random order across the batches.
If you want to use the same image, you could directly get it by indexing the dataset via: data, target = train_loader.dataset[index].
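Here is a minimal sketch of that idea, assuming your dataset applies ToTensor (so indexing it returns a [1, 28, 28] tensor) and reusing the names model, T, and plt from your snippet; fixed_index is just a placeholder for whatever sample you want to track:

    # fetch one fixed sample directly from the underlying dataset,
    # independent of the DataLoader's shuffling
    fixed_index = 540
    data, target = train_loader.dataset[fixed_index]

    model.eval()
    with torch.no_grad():  # no gradients needed for visualization
        recon = model(data.view(-1, 784)).view(1, 28, 28)
    model.train()

    plt.subplot(121)
    plt.imshow(T.ToPILImage()(recon))
    plt.subplot(122)
    plt.imshow(T.ToPILImage()(data))
    plt.suptitle("Reconstructed vs Original (sample {})".format(fixed_index))
    plt.show()

Alternatively, you could create a second DataLoader with shuffle=False just for visualization; with a fixed batch order, "image [23] of batch 30" would then refer to the same sample in every epoch.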