I have trained a CRNN model containing LSTM, batch-norm and convolutional layers. Using the trained model, I'm trying to train the *input* of the model. (The trained model is loaded from disk.)
    image = torch.rand(1, 1, input_size, input_size).cuda()
    loss_function = CTCLoss().to(device)
    optimizer = optim.Adam([image.requires_grad_()], lr=lr)
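To make the setup concrete, here is a minimal, self-contained sketch of what I mean. The toy conv + batch-norm model and the MSE-style loss are stand-ins for my actual CRNN and CTCLoss; only the input tensor is passed to the optimizer. It also checks whether anything inside the model moves during the input-training loop:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in for the CRNN: conv + batch norm.
model = nn.Sequential(
    nn.Conv2d(1, 4, 3, padding=1),
    nn.BatchNorm2d(4),
    nn.ReLU(),
)
for p in model.parameters():
    p.requires_grad_(False)  # freeze the weights; only the input is trained

image = torch.rand(1, 1, 8, 8, requires_grad=True)
optimizer = optim.Adam([image], lr=0.1)  # model params NOT included

params_before = [p.clone() for p in model.parameters()]
buffers_before = [b.clone() for b in model.buffers()]  # BN running mean/var

for _ in range(3):
    optimizer.zero_grad()
    out = model(image)        # note: model is in train() mode by default
    loss = out.pow(2).mean()  # toy loss in place of CTCLoss
    loss.backward()
    optimizer.step()          # updates only `image`

# Parameters stay fixed, but the batch-norm *buffers*
# (running statistics) still move during forward passes in train mode.
params_unchanged = all(
    torch.equal(a, b) for a, b in zip(params_before, model.parameters())
)
buffers_changed = any(
    not torch.equal(a, b) for a, b in zip(buffers_before, model.buffers())
)
print(params_unchanged, buffers_changed)  # prints: True True
```

In this sketch the optimizer never touches the parameters, yet the batch-norm running statistics drift with each forward pass unless `model.eval()` is called, which may be related to what I'm seeing.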
The image is saved after training. This saved image generates a different output with a freshly loaded copy of the model than with the model instance used to train it. Using a single image, I discovered that this difference occurs only if optimizer.step() is called.
Am I missing something here? Does the optimizer change the model even in this case? (I haven't included the model parameters in the optimizer.)