How is this an inplace operation error?

I am running into this problem again and again:

Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor []], which is output 0 of AsStridedBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

The code shown in the backtrace is the following:

v = self.id_tensor_list(self.d)
S_n = 0
for i in range(self.d):
    S_n = S_n + torch.mul(torch.diag(v[i]), torch.exp(self.phi[i][2] * torch.log(torch.exp(self.phi[i][3]) + self.Y[n][i])))
return S_n

The backtrace points to the line

S_n = S_n + torch.mul(torch.diag(v[i]), torch.exp(self.phi[i][2] * torch.log(torch.exp(self.phi[i][3]) + self.Y[n][i])))

as the problematic operation, but I don't understand how this is an in-place operation.

v is a list of tensors, Y is a tensor, and phi is a tensor with requires_grad=True.

How do I fix this issue?

Can you please post some code that reproduces the error?
From this piece of code, I cannot really figure out where the in-place op is taking place, since all the functions you used (exp, log, mul, diag) return new tensors.

I tried reproducing the error with very similar code, but it ran error-free, so I wasn't able to reproduce what you are seeing.
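
For reference, here is a minimal sketch of how this error usually arises (a hypothetical example, not your code): an in-place update to a tensor that autograd saved for the backward pass bumps its version counter, and backward() then refuses to use it.

import torch

# Hypothetical minimal reproduction, assuming an in-place op is the cause:
# exp() saves its output for the backward pass, so modifying that output
# in place bumps its version counter and breaks gradient computation.
phi = torch.randn(3, requires_grad=True)

y = torch.exp(phi)  # autograd saves y to compute exp's gradient
y += 1              # in-place op: y goes from version 0 to version 1

y.sum().backward()  # RuntimeError: ... modified by an inplace operation

Since your snippet only uses out-of-place ops, my best guess is that the in-place modification happens somewhere else, e.g. phi being updated in place (an optimizer.step(), or a manual phi.data assignment) between the forward pass that built S_n and the backward() call. The "Good luck!" hint in your message suggests torch.autograd.set_detect_anomaly(True) is already enabled; the forward-pass backtrace it prints identifies the op whose saved tensor was modified, and per the message, the modification happened on that line "or anywhere later".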