I am getting the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Here is my setup: I need to run all of my network's forward passes in advance, before doing any backward pass. I want to store the resulting tensors in memory with their gradient graphs intact, so that during the training phase I can reuse them to compute losses.
A bit more detail. As I perform the forward passes, I store each output in a dictionary whose keys map to bags (groups) of outputs from my network:
d[bag_i] = torch.cat((d[bag_i], outputs[i]))
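Roughly, the whole storage loop looks like this (a sketch: net, inputs, and bag_of stand in for my actual model, data, and bag-assignment logic):

import torch

d = {}
for i, x in enumerate(inputs):              # inputs: my pre-collected samples
    out = net(x)                            # forward pass; out carries its grad graph
    key = bag_of(i)                         # which bag this output belongs to
    if key in d:
        d[key] = torch.cat((d[key], out))   # same concatenation as above
    else:
        d[key] = out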
After I am done with this, I do the following reshaping:
for i in d:
    d[i] = d[i].view(-1, 100)  # each bag becomes a (num_samples, 100) tensor
During training, I sample elements from these bags and use the stored tensors to compute the loss for each batch, without re-running any forward passes.
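Concretely, one training step looks roughly like this (criterion, targets, and sampled_keys are placeholders for my actual loss, labels, and sampling logic):

criterion = torch.nn.MSELoss()              # placeholder for my actual loss
for key in sampled_keys:                    # bag keys sampled for this batch
    preds = d[key]                          # stored outputs, grad graph intact
    loss = criterion(preds, targets[key])   # no forward pass through net here
    loss.backward()                         # this is where the RuntimeError appears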
It looks like I am performing an in-place operation somewhere, so I am wondering: is concatenating the tensors this way safe, i.e., does it keep the autograd graph intact?