I get this error message when I try to run my code:

"RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation"

However, I can't find any in-place operation in my code.
Here’s my code:
```python
def forward(self, sentence, state):
    candidates = state.clone()
    for j in range(self.nslots):
        w_j = self.key_vector_FC[j](Variable(torch.Tensor()))
        gate_j = F.sigmoid(state[j] * sentence + state[j] * w_j)
        update = F.relu(self.U(state[j]) + self.V(w_j) + self.W(sentence), inplace=False)
        candidates[j] = candidates[j] + gate_j * update
        norm = candidates[j].norm(p=2, dim=0, keepdim=True)  # .detach()
        candidates[j] = candidates[j].div(norm)  # problem here!
    state = candidates
    return state
```
I tried replacing the division with

state[j] = state[j].div(2)

and the problem went away.
I also tried the .detach() method, but it made no difference.
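For context, here is a minimal sketch (with illustrative tensors, not the model above) of what I think is going on: an indexed assignment like candidates[j] = ... is itself an in-place operation, so if autograd saved the overwritten tensor for the backward pass, the same RuntimeError appears. Collecting the per-slot results in a list and using torch.stack avoids the in-place writes:

```python
import torch

# Indexed assignment is in-place: if autograd saved the overwritten
# tensor for backward, the backward pass raises the RuntimeError
# quoted in the question.
x = torch.ones(3, requires_grad=True)
y = x.exp()             # exp() saves its output for the backward pass
y[0] = 0.0              # in-place write clobbers the saved output
caught = False
try:
    y.sum().backward()
except RuntimeError:
    caught = True       # "...modified by an inplace operation"

# One out-of-place workaround: build the result with torch.stack
# instead of writing into slots of a pre-allocated tensor.
x = torch.ones(3, requires_grad=True)
y = x.exp()
rows = [y[j] / y.norm() for j in range(3)]  # no in-place writes
out = torch.stack(rows)
out.sum().backward()    # backward now succeeds
```

If this diagnosis is right, the same list-plus-stack pattern should apply to the candidates[j] assignments in the loop above.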