Inception v3 RuntimeError with torch 1.0.0

I am running exactly the same code with torch==0.4.1 and torch==1.0.0.

torch==0.4.1: my code works with ResNet101 and Inception v3
torch==1.0.0: my code works with ResNet101 but Inception v3 fails with this stack trace:

Traceback (most recent call last):
  File "/home/anianruoss/experiments/imagenet/", line 100, in <module>
    flows_x0, optimizers[args.optimizer]
  File "/home/anianruoss/stAdv_pytorch/", line 189, in pytorch_wrapper
  File "/home/anianruoss/venv/lib/python3.6/site-packages/torch/optim/", line 58, in step
    loss = closure()
  File "/home/anianruoss/stAdv_pytorch/", line 183, in closure
  File "/home/anianruoss/venv/lib/python3.6/site-packages/torch/", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/anianruoss/venv/lib/python3.6/site-packages/torch/autograd/", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

This is roughly what my code does:

optimizer = torch.optim.Adam([flows])

def closure():
    optimizer.zero_grad()
    loss = CustomLoss(flows).sum()
    loss.backward()
    return loss

for i in range(steps):
    optimizer.step(closure)
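For context, the same RuntimeError can be triggered outside my setup with a tiny snippet that modifies a tensor autograd has saved for backward (here `exp()` just stands in for whatever op inside the model saves its output; this is not my actual CustomLoss):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.exp(x)  # exp() saves its output for the backward pass
y += 1            # in-place op invalidates the saved tensor

try:
    y.sum().backward()
except RuntimeError as e:
    print(e)  # "... has been modified by an inplace operation"
```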

Do you know what could be the reason for this?


Did you solve it? I had the same issue and have no idea what causes it.

Please try to use PyTorch 1.0.1.

Thank you for your help @Tony-Y. Unfortunately the problem remains even with torch==1.0.1.post2 and torchvision==0.2.1.

Maybe you have to modify CustomLoss slightly. A similar issue caused by a custom loss has been reported:

He resolved this problem by cloning tensors in the custom loss.
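The workaround looks roughly like this (`custom_loss` is a hypothetical stand-in, not the actual CustomLoss from this thread): clone the tensor before applying any in-place operations, so the tensor autograd saved for backward stays untouched.

```python
import torch

def custom_loss(flows, target):
    # Cloning first means later in-place ops touch the copy,
    # not the tensor autograd saved for the backward pass.
    out = flows.clone()
    out -= target              # in-place, but only on the clone
    return (out ** 2).sum()

flows = torch.zeros(2, 2, requires_grad=True)
loss = custom_loss(flows, torch.ones(2, 2))
loss.backward()                # runs without the RuntimeError
```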

That seems to solve the problem. Thanks! However, I am still confused as to why this occurs with Inception v3 and not with ResNet-101.

This article might help you understand why the cloning is needed.
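In short, every tensor carries a version counter that is bumped by each in-place operation, and backward raises the RuntimeError when a saved tensor's version no longer matches the one recorded at save time. You can observe the counter via the internal `_version` attribute (shown here only for illustration):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.exp(x)   # autograd saves y (at version 0) for backward
print(y._version)  # 0
y += 1             # in-place op bumps the version counter
print(y._version)  # 1 -> mismatch with the saved version at backward time
```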