Backward error and RuntimeError

Any suggestions on how to debug the following? Thank you!!

Traceback (most recent call last):
  File "train.py", line 90, in <module>
    g_loss.backward()
  File "/Users/zxiao/opt/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/Users/zxiao/opt/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 1024, 1, 1]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Could you post a minimal, executable code snippet to reproduce this issue, please?
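In the meantime, here is a hedged sketch (not the original train.py, whose code we haven't seen) of how this error typically arises: autograd saves a tensor during the forward pass because it is needed to compute a gradient, and an in-place operation later bumps that tensor's version counter, so backward() finds a version mismatch.

```python
import torch

# Hypothetical minimal reproducer: autograd saves x for the backward
# pass of the multiplication (it is needed to compute dL/dw), and the
# subsequent in-place add_ bumps x's version counter.
w = torch.randn(1, 1024, 1, 1, requires_grad=True)
x = torch.randn(1, 1024, 1, 1)

y = w * x        # x is saved for backward here
x.add_(1.0)      # in-place modification of a saved tensor

caught = None
try:
    y.sum().backward()
except RuntimeError as e:
    caught = str(e)
print(caught)    # mentions "modified by an inplace operation"

# Fix: use an out-of-place operation so the saved tensor is untouched
w2 = torch.randn(1, 1024, 1, 1, requires_grad=True)
x2 = torch.randn(1, 1024, 1, 1)
y2 = w2 * x2
x2 = x2 + 1.0    # creates a new tensor; y2's saved input is unchanged
y2.sum().backward()
```

To locate the offending operation in your own code, you can wrap the run in `torch.autograd.set_detect_anomaly(True)`, which makes the error point at the forward-pass op whose saved tensor was modified. In GAN training, a common culprit is calling the discriminator optimizer's `step()` (which updates parameters in place) between the generator's forward pass and `g_loss.backward()`.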