This is my class definition:

```python
class Tracker(nn.Module):
    def __init__(self):
        super(Tracker, self).__init__()
        self.bigru = nn.GRU(input_size=2, hidden_size=100,
                            batch_first=True, bidirectional=True)
        self.fc1 = nn.Linear(200, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, inputs):
        x, states = self.bigru(inputs)
        x = self.fc1(x[:, -1, :])
        x = self.fc2(x)
        return x
```
While training I call

```python
loss.backward(retain_graph=True)
```

but I get the error:

```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
```
I went through a couple of discussions on the topic and realized the error is caused by an in-place operation on a tensor, so I switched to the following code:
```python
def forward(self, inputs):
    x, states = self.bigru(inputs)
    k = x.clone()
    y = self.fc1(k[:, -1, :])
    z = self.fc2(y.clone())
    return z
```
The error still persists. Can anyone tell me where I am going wrong? Thanks!
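In case it helps to reproduce, here is a minimal self-contained version of my setup. The optimizer, loss function, and input/target shapes below are placeholders I made up for the sketch, not my actual training code; note that `optimizer.step()` updates the GRU weights in place, which is relevant when a graph from a previous iteration is retained:

```python
import torch
import torch.nn as nn

class Tracker(nn.Module):
    def __init__(self):
        super(Tracker, self).__init__()
        self.bigru = nn.GRU(input_size=2, hidden_size=100,
                            batch_first=True, bidirectional=True)
        self.fc1 = nn.Linear(200, 32)   # 200 = 2 directions * hidden_size
        self.fc2 = nn.Linear(32, 2)

    def forward(self, inputs):
        x, states = self.bigru(inputs)
        x = self.fc1(x[:, -1, :])       # last time step, both directions
        return self.fc2(x)

# Placeholder training setup (assumed, not the original code)
model = Tracker()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()
inputs = torch.randn(4, 10, 2)          # (batch, seq_len, features)
targets = torch.randn(4, 2)

for _ in range(2):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    # A fresh forward pass builds a fresh graph each iteration, so
    # retain_graph=True is not needed here; optimizer.step() then
    # modifies the parameters in place.
    loss.backward()
    optimizer.step()
```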