Error: one of the variables needed for gradient computation has been modified by an inplace operation

I implemented the code below:

for layer in self.model.features:
    if isinstance(layer, torch.nn.Conv2d):
        # flatten each output channel's kernel into a row vector
        kernel_vec = layer.weight.reshape(layer.out_channels, -1)
        kernel_norm = torch.norm(kernel_vec, p=1, dim=1)
        # L1-normalize each kernel row
        for i in range(kernel_norm.shape[0]):
            kernel_vec[i] = kernel_vec[i] / kernel_norm[i]
        # pairwise inner products between the normalized kernels
        inner_prod = torch.mm(kernel_vec, kernel_vec.t())
        inner_prod = inner_prod - torch.diag(inner_prod)
        inner_loss = torch.sum(torch.abs(inner_prod))
        loss += inner_loss

optimizer.zero_grad()
loss.backward()
optimizer.step()

But it doesn’t work; I get the following error:

one of the variables needed for gradient computation has been modified by an inplace operation

I found that this error is caused by an in-place operation, but I don’t know how to fix the code…
Could you help me?

Hi,

Which version of PyTorch are you using? In the latest version, this error message contains much more information than that. 🙂
You can also enable anomaly detection to find out which forward method produced a Tensor that was later modified in place.

Thank you for the reply.
I am using PyTorch 1.2.
The full error message is:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4608]], which is output 0 of SelectBackward, is at version 514; expected version 513 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

And if you don’t mind, could you let me know how to use anomaly detection?

You might want to upgrade your PyTorch version to get a more useful error message.

The doc is here, but you can simply add torch.autograd.set_detect_anomaly(True) at the beginning of your file (before the forward pass).
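
For example, here is a minimal self-contained sketch of how to use it (the exp example is just an illustration that triggers the same class of error, not the code from this thread):

import torch

# Enable anomaly detection before any forward pass runs.
torch.autograd.set_detect_anomaly(True)

x = torch.randn(3, requires_grad=True)
y = x.exp()          # exp saves its output for the backward pass
y[0] = 0.0           # in-place write invalidates the saved tensor
y.sum().backward()   # raises; anomaly mode also prints a traceback
                     # pointing at the forward line that produced y

With anomaly detection enabled, the RuntimeError is preceded by a warning showing exactly which forward operation created the tensor that was later modified in place.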

Thank you for your advice!
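
For reference, the in-place write in the snippet above is the indexing assignment kernel_vec[i] = kernel_vec[i] / kernel_norm[i]: kernel_vec is a view of layer.weight, so each assignment writes into values that autograd already saved for the backward pass (every write bumps a version counter, which is where the “version 514; expected version 513” in the message comes from). Below is a sketch of an out-of-place rewrite, assuming the goal is to L1-normalize each kernel row before taking pairwise inner products:

for layer in self.model.features:
    if isinstance(layer, torch.nn.Conv2d):
        kernel_vec = layer.weight.reshape(layer.out_channels, -1)
        # keepdim=True so the norms broadcast against the rows
        kernel_norm = torch.norm(kernel_vec, p=1, dim=1, keepdim=True)
        # out-of-place: the division builds a new tensor instead of
        # writing into the view of layer.weight
        kernel_vec = kernel_vec / kernel_norm
        inner_prod = torch.mm(kernel_vec, kernel_vec.t())
        # torch.diag(matrix) returns the 1-D diagonal, so wrapping it in a
        # second torch.diag builds the diagonal matrix to subtract
        inner_prod = inner_prod - torch.diag(torch.diag(inner_prod))
        loss = loss + torch.sum(torch.abs(inner_prod))

Note that the original inner_prod - torch.diag(inner_prod) subtracts the 1-D diagonal vector from every row by broadcasting; if the intent is to zero only the self-products on the diagonal, the double torch.diag above does that.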