Why does this code throw an in-place error?

    import torch

    A = torch.nn.Parameter(torch.randn(3, 3))
    B = torch.randn(2, 4, 3)
    B[0, :, :] = B[0, :, :].mm(A)  # in-place write into B
    loss = B.sum()
    loss.backward()  # the error is raised here

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [4, 3]] is at version 1; expected version 0 instead.
How could I solve this problem? :slight_smile:


In the line `B[0, :, :] = B[0, :, :].mm(A)`, you are overwriting B in place, but autograd saved the original value of `B[0, :, :]` during the forward pass and needs it to compute gradients during backpropagation.

You can do something like this to ensure you are not losing the value of B:

    import torch

    A = torch.nn.Parameter(torch.randn(3, 3))
    B = torch.randn(2, 4, 3)
    z = B[0, :, :].mm(A)  # result goes into a new tensor
    loss = z.sum()
    loss.backward()  # no error: B was never modified

Because the result of the matrix multiplication is assigned to a new tensor `z` instead of written back into B, B is never mutated, so autograd still has access to its original values.

Best regards

Thanks for your kind help. Actually, the code above is a simplified version of my problem; in the real situation, I cannot assign `B[0, :, :].mm(A)` to a new tensor `z`. However, following your suggestion, I found that using `.clone()` solves my problem :slight_smile:
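For readers hitting the same error: the post above does not show exactly where `.clone()` goes, but one plausible placement (an assumption, not the poster's exact code) is to clone the slice that feeds `mm`. Autograd then saves the clone for the backward pass, and the clone is untouched by the later in-place write into B:

```python
import torch

A = torch.nn.Parameter(torch.randn(3, 3))
B = torch.randn(2, 4, 3)

# Clone the slice before mm: autograd saves the clone (never mutated)
# rather than a view into B (which the assignment below mutates).
B[0, :, :] = B[0, :, :].clone().mm(A)
loss = B.sum()
loss.backward()
print(A.grad.shape)  # torch.Size([3, 3])
```

The in-place assignment still updates B as the real code requires, but the version counter of the tensor saved for `mm`'s backward is no longer bumped, so the RuntimeError disappears.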


Thank you for sharing. Actually, I had not run into this problem before; I found that workaround by trial and error after reading the documentation on in-place operations. Your `.clone()` approach is the conventional one.

Good luck