Trying to run the first model on GPU


I am trying to run the first model on the GPU, but could not get it to work.

The notebook is at

The code works with dev = torch.device("cpu"), but with dev = torch.device("cuda"), weights.grad is None.

I browsed around and found that grad is not stored for non-leaf tensors, but here bias and weights are leaf tensors, so I don't know why it fails.
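A likely cause, sketched below under the assumption that the notebook moves the tensors with .to(dev): calling .to(...) on a leaf tensor returns a new, non-leaf tensor, and autograd never populates .grad on non-leaf tensors. The sketch uses a dtype change so it runs without a GPU, but .to("cuda") behaves the same way:

```python
import torch

# A .to(...) call that actually produces a new tensor returns a NON-leaf
# tensor, so autograd never fills in its .grad attribute.
# (Demonstrated with a dtype change so it runs without a GPU;
#  weights.to("cuda") behaves the same way.)
w = torch.randn(3, requires_grad=True)   # leaf tensor
w_moved = w.to(torch.float64)            # new, non-leaf tensor
print(w.is_leaf, w_moved.is_leaf)        # True False

loss = w_moved.sum()
loss.backward()
print(w.grad)        # populated on the original leaf tensor
print(w_moved.grad)  # None -- the same symptom seen in the notebook

# Possible fixes (assumed, since the notebook code is not shown):
# 1) create the tensor directly on the device:
#    weights = torch.randn(784, 10, device="cuda", requires_grad=True)
# 2) move first, then mark the moved tensor as a leaf requiring grad:
#    weights = torch.randn(784, 10).to("cuda").requires_grad_()
```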

Thanks in advance.

The same code works on "cpu", but on "cuda" it produces the following error:

TypeError                                 Traceback (most recent call last)
     16 loss.backward()
     17 with torch.no_grad():
---> 18     weights -= weights.grad * lr
     19     bias -= bias.grad * lr
     20     weights.grad.zero_()

TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'

Check that both lr and weights.grad are present and not None in the code.

Thanks Ashok. I checked: lr is not None, but weights.grad is None. The same code works fine if the device is "cpu".