Modifying the parameters of a layer manually

Hi,

So far I have trained the last fully connected layer manually and written the updated parameters back into the model as follows:

for param in model.fc.parameters():
    fc_params = param.data.clone().detach().t()   # copy the parameter values out, transposed

fc_params = custom_train(fc_params)               # my own gradient computation and update

state_dict = model.fc.state_dict()
for name, param in state_dict.items():
    state_dict[name].copy_(fc_params.t())         # write the updated values back in place

I compute the gradients for that layer in my own function, so I define the optimizer without the last layer:

optimizer = torch.optim.Adam([{"params": model.layer1.parameters(), "lr": lr},
                              {"params": model.layer2.parameters(), "lr": lr}])

I have just updated to PyTorch 1.5 and this no longer works. The error message says:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.DoubleTensor [50, 16]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
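
My best guess is that the in-place write-back is what autograd complains about, since it changes a tensor that is still needed for the backward pass, and that it should happen only after backward() and outside the graph, roughly like this:

loss.backward()
optimizer.step()

fc_params = model.fc.weight.detach().clone().t()   # copy out the current weight
fc_params = custom_train(fc_params)                # my own gradient computation and update
with torch.no_grad():
    model.fc.weight.copy_(fc_params.t())           # write the updated weight back in place

I am not sure whether that is the right approach, though.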

What is the proper way of doing this?

Thank you.