RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [5, 100]] is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
I can't figure out what exactly is wrong.
Here is the stack trace:
C:\Users\matty\PycharmProjects\my-proto-tc\proto_net.py:82: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
sentence_tensors.append(torch.tensor(encoded_support_set[i][j].clone().detach().requires_grad_(True)))
Warning: Traceback of forward call that caused the error:
File "C:/Users/matty/PycharmProjects/my-proto-tc/train.py", line 29, in <module>
outputs = model.forward(x_in, x_ood, support_set, labels)
File "C:\Users\matty\PycharmProjects\my-proto-tc\proto_net.py", line 89, in forward
dists = self.distance_function(x_in, prototypes)
File "C:\Users\matty\PycharmProjects\my-proto-tc\proto_net.py", line 47, in cosine_similarity
return F.cosine_similarity(x, y)
(print_stack at ..\torch\csrc\autograd\python_anomaly_mode.cpp:57)
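For context, here is my understanding of the two messages, as a minimal standalone sketch (this is not my actual model code; `encoded` is a hypothetical stand-in for `encoded_support_set[i][j]`). The first part shows the copy construct the UserWarning recommends, without the redundant `torch.tensor(...)` wrapper; the second part reproduces the version-counter RuntimeError by modifying a tensor in place after autograd has saved it for backward:

```python
import torch

# Hypothetical stand-in for encoded_support_set[i][j] from proto_net.py.
encoded = torch.randn(5, 100)

# The UserWarning is only about the extra torch.tensor(...) wrapper;
# the recommended copy construct is just clone().detach():
copy = encoded.clone().detach().requires_grad_(True)
assert copy.is_leaf and torch.equal(copy, encoded)

# Minimal reproduction of the RuntimeError itself: sigmoid saves its
# output for backward, so modifying that output in place bumps its
# version counter (0 -> 1) and backward() fails.
x = torch.randn(5, 100, requires_grad=True)
y = torch.sigmoid(x)
y.mul_(2)  # in-place: y is now "at version 1; expected version 0"
err = None
try:
    y.sum().backward()
except RuntimeError as e:
    err = e
assert err is not None and "inplace operation" in str(err)
```

So I assume something between building the prototypes and calling `F.cosine_similarity` modifies a saved `[5, 100]` tensor in place, but I can't see where.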