Torch parameter gradient is None

I’m trying to optimize a torch parameter using a pre-trained network.
This is my code.

single_frame = image[0,:,:,:].to(device)
modif = torch.ones(1, 2, 34, 34, device=device)
modifier = torch.nn.Parameter(modif, requires_grad=True)
optimizer = torch.optim.Adam([modifier], lr=learning_rate)
criterion = torch.nn.CrossEntropyLoss()
iters = 10
for i in range(iters):

    modified_frame = single_frame + modifier
    if label.dim() == 0:
        label = label.unsqueeze(0).to(device)
    with torch.no_grad():
        output = model(modified_frame)
    output = output.requires_grad_(True)
    loss = criterion(output, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The output shows that modifier never changes, and printing modifier.grad gives None.
What might be the cause?

You are performing the forward pass of your model inside a no_grad() context and are afterwards setting the .requires_grad attribute of the output to True. That only makes the output a new leaf tensor; it won't attach the output to the computation graph, so no gradient can flow back to modifier. Perform the forward pass in the global context (i.e. without torch.no_grad()) and it should work. If you wrapped the forward pass in no_grad() to keep the pre-trained network from training, freeze its parameters instead via p.requires_grad_(False).
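Here is a minimal, self-contained sketch of the fix. The model, shapes, and label are stand-ins for illustration (a single Linear layer instead of your pre-trained network); the point is that the forward pass runs outside `torch.no_grad()` and the network is frozen by disabling its parameters' gradients:

```python
import torch

torch.manual_seed(0)
device = "cpu"

# Stand-in for the pre-trained network (hypothetical; use your own model).
model = torch.nn.Linear(8, 3).to(device)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the network; only the modifier trains

single_frame = torch.randn(1, 8, device=device)
modifier = torch.nn.Parameter(torch.ones(1, 8, device=device))
optimizer = torch.optim.Adam([modifier], lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
label = torch.tensor([1], device=device)  # dummy target class

for _ in range(10):
    modified_frame = single_frame + modifier
    output = model(modified_frame)  # forward pass OUTSIDE torch.no_grad()
    loss = criterion(output, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(modifier.grad is not None)  # True: gradients now reach the modifier
```

After the loop, `modifier.grad` is populated and `modifier` has moved away from its initial all-ones value, which is exactly what was failing before.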
