[Please Help] in-place operation in module

When I use the following module:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class AngleLoss(nn.Module):
    def __init__(self, gamma=0):
        super(AngleLoss, self).__init__()
        self.gamma = gamma
        self.it = 0
        self.LambdaMin = 5.0
        self.LambdaMax = 1500.0
        self.lamb = 1500.0

    def forward(self, input, target):
        self.it += 1
        cos_theta, phi_theta = input
        target = target.view(-1, 1)  # size=(B,1)

        # one-hot mask selecting each sample's target class
        index = cos_theta.data * 0.0  # size=(B,Classnum)
        index.scatter_(1, target.data.view(-1, 1), 1)
        index = index.byte()
        index = Variable(index)

        # anneal the weight of the angular-margin term
        self.lamb = max(self.LambdaMin, self.LambdaMax / (1 + 0.1 * self.it))
        output = cos_theta * 1.0  # size=(B,Classnum)
        # these two lines modify `output` (a non-leaf tensor) in place
        output[index] -= cos_theta[index] * (1.0 + 0) / (1 + self.lamb)
        output[index] += phi_theta[index] * (1.0 + 0) / (1 + self.lamb)

        # focal-loss style weighting of the log-probabilities
        logpt = F.log_softmax(output, dim=1)
        logpt = logpt.gather(1, target)
        logpt = logpt.view(-1)
        pt = Variable(logpt.data.exp())

        loss = -1 * (1 - pt) ** self.gamma * logpt
        loss = loss.mean()

        return loss

I get this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 2]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
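
As the hint says, anomaly detection can help locate the failing operation. A minimal sketch of enabling it (model and criterion are placeholder names, not from the code above):

import torch

# With anomaly detection enabled, the backward error also prints a second
# traceback that points at the forward operation which created the tensor
# that was later modified in place.
torch.autograd.set_detect_anomaly(True)

loss = criterion(model(x), target)  # placeholder model / loss
loss.backward()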

It seems that the problematic part is:

        output[index] -= cos_theta[index] * (1.0 + 0) / (1 + self.lamb)
        output[index] += phi_theta[index] * (1.0 + 0) / (1 + self.lamb)

How should I modify the code?

You can try:
output[index] = output[index] - cos_theta[index] * (1.0 + 0) / (1 + self.lamb)
output[index] = output[index] + phi_theta[index] * (1.0 + 0) / (1 + self.lamb)

Thank you! I have tried this method, but it did not solve my problem.
I finally solved it by adding .clone() to output:

output = cos_theta * 1.0
output1 = output.clone()
# read from the untouched clone and write into `output` once; two separate
# assignments both reading from `output1` would overwrite each other
output[index] = (output1[index]
                 - cos_theta[index] * (1.0 + 0) / (1 + self.lamb)
                 + phi_theta[index] * (1.0 + 0) / (1 + self.lamb))
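
For what it's worth, the masked assignment can be avoided altogether. A sketch of a fully out-of-place variant using torch.where (assuming index is converted to a boolean mask; this is a rewrite, not the original code):

# hypothetical out-of-place variant: nothing is mutated, so autograd
# never sees a version mismatch
scale = (1.0 + 0) / (1 + self.lamb)
output = torch.where(index.bool(),
                     cos_theta - cos_theta * scale + phi_theta * scale,
                     cos_theta)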

It seems that x = x + 1 is not an in-place operation, but x[index] = x[index] + 1 is an in-place operation.
To be honest, I don’t quite understand why.
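
For reference, the difference shows up on a tensor's internal ._version counter, which autograd uses to detect in-place modification (an illustrative sketch):

import torch

x = torch.ones(3, requires_grad=True) * 2.0  # non-leaf tensor

y = x + 1          # out-of-place: builds a new tensor, x is untouched
print(x._version)  # 0

x[0] = x[0] + 1    # indexed assignment writes into x's storage
print(x._version)  # 1 -- autograd now treats x as modified in place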

Hi,

When you do x[index] = anything, it is an in-place operation, because you are modifying part of x instead of rebinding the name x to a new object. The same thing happens if you do it with a Python list, for example.
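
Made concrete with a plain Python list:

x = [1, 2, 3]
print(id(x))

x = x + [4]   # builds a brand-new list and rebinds the name x
print(id(x))  # different id: the original list was never touched

x[0] = 99     # item assignment mutates the existing list
print(id(x))  # same id as above: an in-place modification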