I was following this Hidden Markov Model tutorial: https://colab.research.google.com/drive/1IUe9lfoIiQsL49atSOgxnCmMR_zJazKI#scrollTo=3CMdK1EfE1SJ

Its limitation is that it breaks on any PyTorch version newer than 1.5.0. While training with the forward algorithm, it throws:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64, 1024]], which is output 0 of TransposeBackward0, is at version 23; expected version 22 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Enabling torch.autograd.set_detect_anomaly(True) doesn't produce any additional output. And as far as I can tell, transpose isn't used in any unusual way where an in-place operation modifies its result. Or is it?
Also, everything runs without any error when PyTorch is downgraded to 1.5.0.
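For context on what this error class means (this is a minimal standalone sketch, not the tutorial's code): transpose returns a *view*, so an in-place update on the underlying tensor also bumps the version counter of the transposed view that autograd saved for the backward pass. Newer PyTorch versions check these version counters more strictly, which may be why the same code passes on 1.5.0 and fails later.

```python
import torch

a = torch.randn(3, 4, requires_grad=True)
b = a * 1.0                  # non-leaf tensor, so in-place ops are permitted
t = b.transpose(0, 1)        # transpose returns a view sharing b's storage
loss = (t ** 2).sum()        # autograd saves t (at version 0) for backward
b.add_(1.0)                  # in-place update bumps the shared version counter

try:
    loss.backward()          # backward finds t at a newer version and fails
except RuntimeError as e:
    err = str(e)             # "... modified by an inplace operation ..."
```

The usual fix is to replace the offending in-place ops (`+=`, `add_`, index assignment into a tensor that's part of the graph) with out-of-place equivalents, e.g. collecting per-timestep results in a list and calling `torch.stack` instead of writing into a preallocated tensor.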