Torch.autograd.grad makes params None

My code is the following:

grad_0 = list(torch.autograd.grad(losses[0], shared_parameters, retain_graph=True))
grad_0 = torch.cat([torch.flatten(grad) for grad in grad_0])

grad_1 = list(torch.autograd.grad(losses[1], shared_parameters, retain_graph=True))
grad_1 = torch.cat([torch.flatten(grad) for grad in grad_1])

losses is a list containing the losses from two tasks. I get an error at the second autograd.grad call saying ValueError: grad requires non-empty inputs. I tried printing shared_parameters after the first call, and nothing prints. What happens to shared_parameters after using it with autograd? I tried using a deep copy, but I get an error saying it cannot pickle a generator object.

I am not sure why this happens. How can I reuse shared_parameters? I think passing a deep copy might make the code behave differently.
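
For reference, here is a minimal sketch of the deep-copy attempt (the resnet50 model is only a placeholder for my actual model):

import copy
import torch
from torchvision import models

model = models.resnet50()
shared_parameters = model.parameters()  # a generator, not a list

# copy.deepcopy falls back to pickling, and generator objects cannot be pickled,
# so this raises: TypeError: cannot pickle 'generator' object
shared_parameters_copy = copy.deepcopy(shared_parameters)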


Your generator is exhausted, so you would either need to recreate it or use e.g. a list instead:

import torch
from torchvision import models

model = models.resnet50()

# Option 1: materialize the parameters in a list, which can be iterated as often as needed
shared_parameters = list(model.parameters())
losses = [model(torch.randn(1, 3, 224, 224)).mean(), model(torch.randn(1, 3, 224, 224)).mean()]

grad_0 = list(torch.autograd.grad(losses[0], shared_parameters, retain_graph=True))
grad_0 = torch.cat([torch.flatten(grad) for grad in grad_0])

# Option 2: recreate the generator before each subsequent use
shared_parameters = model.parameters()
grad_1 = list(torch.autograd.grad(losses[1], shared_parameters, retain_graph=True))
grad_1 = torch.cat([torch.flatten(grad) for grad in grad_1])

# minimal code snippet to reproduce: a generator can only be consumed once
shared_parameters = model.parameters()
for p in shared_parameters:
    print(p.sum())

# the generator is now exhausted, so nothing is printed
for p in shared_parameters:
    print(p.sum())
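
For completeness, a sketch of your original snippet with the fix applied (the model and the two losses here are just placeholders for your actual multi-task setup):

import torch
from torchvision import models

model = models.resnet50()

# materialize the parameters once; the same list can be reused for both grad calls
shared_parameters = list(model.parameters())

out = model(torch.randn(1, 3, 224, 224))
losses = [out.mean(), out.pow(2).mean()]

grad_0 = torch.autograd.grad(losses[0], shared_parameters, retain_graph=True)
grad_0 = torch.cat([torch.flatten(grad) for grad in grad_0])

# the list is still fully populated, so the second call succeeds as well
grad_1 = torch.autograd.grad(losses[1], shared_parameters, retain_graph=True)
grad_1 = torch.cat([torch.flatten(grad) for grad in grad_1])

print(grad_0.shape, grad_1.shape)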

Thank you very much. The PyTorch community is lucky to have you.


Thank you so much!
I have struggled with this problem for a whole day!
