RuntimeError: assigned grad has data of a different type

I am trying to assign an ndarray of zeros to the grad of the model parameters, but I am getting the error in the title.

I was just trying to see whether I can assign a NumPy array to param.grad; in my actual project I won't be assigning zeros.

for param in model.parameters():
    # print(param.grad)
    param.grad = torch.from_numpy(np.zeros(param.grad.shape)).type(param.grad.dtype)

I am trying to do this after the loss.backward() step.
I want to know what is causing this error and how to solve it.
Kindly help!
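
For context, a minimal self-contained reproduction of this setup (the Linear model here is a hypothetical stand-in for the real one; the error only shows up when the parameters live on a GPU):

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(4, 2).to(device)   # hypothetical stand-in for the real model

out = model(torch.randn(8, 4, device=device))
out.sum().backward()                       # populate param.grad

for param in model.parameters():
    # torch.from_numpy always produces a CPU tensor; if the model is on a GPU,
    # this assignment raises the type/device mismatch error from the title.
    param.grad = torch.from_numpy(np.zeros(param.grad.shape)).type(param.grad.dtype)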

Adding model.to("cpu") before the for loop worked.

Thanks!!

(You also need to move the model back to the device before passing an input through it.)
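
Putting that together, a sketch of this workaround, assuming device holds whatever device the model normally runs on (e.g. torch.device("cuda")):

model.to("cpu")                    # params and their .grad now live on the CPU
for param in model.parameters():
    # a CPU tensor assigned to a CPU param: the types now match
    param.grad = torch.from_numpy(np.zeros(param.grad.shape)).type(param.grad.dtype)
model.to(device)                   # move back before the next forward pass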

Note that this comes from the fact that the tensor created by torch.from_numpy was on the CPU, while the param was on the GPU. The error message talks about "type" because a CPU float tensor (torch.FloatTensor) and a CUDA float tensor (torch.cuda.FloatTensor) count as different types.
Another way to solve this (without having to send the whole model back to the CPU) is to send the new grad to the GPU as well with .to(param.device).
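
A sketch of that variant, keeping the model where it is:

for param in model.parameters():
    param.grad = (
        torch.from_numpy(np.zeros(param.grad.shape))
        .type(param.grad.dtype)
        .to(param.device)   # match the device of each individual parameter
    )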


I ran into this error again, and the real cause is that param.grad and the new grad are not on the same device, so the best way to solve it is:

param.grad = torch.from_numpy(np.zeros(param.grad.shape)).type(param.grad.dtype).to(param.device)
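
One detail worth keeping: np.zeros defaults to float64, while model parameters are usually float32, so the .type(param.grad.dtype) cast is doing real work here; dropping it would trade the device mismatch error for a dtype one.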

.to(param.device) truly is the solution!!!
