Hi all,
Even though there are several similar threads already, I will explain my specific problem here. I am running into the following error message:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [4, 64, 3, 3]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
It occurs when computing the gradients for the second backward pass. I suspect it has to do with the fact that I keep the state variables of an LSTM feature-map creator as attributes (self.h, self.c) on an nn.Module, so they carry their autograd history across forward calls. Here is the relevant snippet of my forward loop:
def forward(self, z, xlr=None, logdet=0, logpz=0, eps=None, reverse=False,
            use_stored=False):
    self.h_new, self.c_next = self.conv_lstm(z, (self.h, self.c))
    # Encode
    if not reverse:
        for i in range(self.L):
            print("Level", i)
            for layer in self.level_modules[i]:
                if isinstance(layer, modules.Squeeze):
                    z = layer(z, reverse=False)
                    self.h_new = layer(self.h_new, reverse=False)
                elif isinstance(layer, FlowStep):
                    z, logdet = layer(z, lr_feat_map=self.h_new,  # lr_downsampled_feats[i + 1], TODO: change this part
                                      x_lr=xlr, logdet=logdet, reverse=False)
                elif isinstance(layer, modules.GaussianPrior):
                    z, logdet, logpz = layer(z, logdet=logdet, logpz=logpz,
                                             lr_feat_map=self.h_new,  # lr_downsampled_feats[i + 1]
                                             eps=eps, reverse=False)
        self.h = self.last_squeezer(self.h_new, reverse=True)
        self.c = self.c_next
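To illustrate what I mean, here is a minimal standalone sketch (hypothetical, not my actual model) of the pattern I am worried about: the recurrent state lives on the module, so every forward call extends the previous step's autograd graph, and the optimizer's in-place weight update then invalidates tensors saved in that older graph.

import torch
import torch.nn as nn

class Carrier(nn.Module):
    def __init__(self):
        super().__init__()
        self.hh = nn.Linear(8, 8, bias=False)
        # state kept on the module across calls, with autograd history
        self.h = torch.zeros(1, 8, requires_grad=True)

    def forward(self, x):
        self.h = torch.tanh(self.hh(self.h) + x)
        return self.h

model = Carrier()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(2):
    loss = model(torch.randn(1, 8)).pow(2).mean()
    loss.backward(retain_graph=True)
    opt.step()       # updates hh.weight in place -> bumps its version counter
    opt.zero_grad()
# The second backward walks back into step 0's graph (reached through
# self.h) and finds hh.weight at version 2 while version 1 was saved,
# raising the same RuntimeError.

If this is indeed the mechanism, then retain_graph=True only suppresses the "backward through the graph a second time" error, and the in-place version check fires instead.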
Do you think it would be best to simply pass the hidden and cell states through the function's inputs and outputs instead of storing them on the module? I already call loss.mean().backward(retain_graph=True) and have skimmed the code for other in-place operations.
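Concretely, the restructuring I have in mind would look something like this (again a hypothetical simplified sketch, not my real module): the state is passed in and returned instead of stored on the module, and detached before the next step, so each backward pass stays within a single step's graph.

import torch
import torch.nn as nn

class Carrier(nn.Module):
    def __init__(self):
        super().__init__()
        self.hh = nn.Linear(8, 8, bias=False)

    def forward(self, x, h):
        # state flows through the inputs/outputs, nothing stored on self
        return torch.tanh(self.hh(h) + x)

model = Carrier()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
h = torch.zeros(1, 8)

for step in range(2):
    h = model(torch.randn(1, 8), h)
    loss = h.pow(2).mean()
    loss.backward()    # no retain_graph needed anymore
    opt.step()
    opt.zero_grad()
    h = h.detach()     # truncate backprop before the next step

In my real module that would mean returning (h, c) from forward alongside z, logdet, and logpz, and detaching them in the training loop.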
Any help would be much appreciated! Please let me know if further code snippets are required.