I am new to PyTorch and still learning it. While trying to use a PyTorch optimiser, I noticed that the optimiser only accepts an iterable of parameters.
I am wondering what the reason is behind designing the PyTorch optimiser this way. Why not just accept a single tensor with one added dimension?
Here is my weight:
import torch

learning_rate = 0.01
w = torch.tensor(0, dtype=torch.float32, requires_grad=True)
optimiser = torch.optim.SGD(w, lr=learning_rate)
This raises the following error:
TypeError: params argument given to the optimizer should be an iterable of Tensors or dicts, but got torch.FloatTensor
When I pass w as a list into the optimiser, everything works great.
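For completeness, here is a minimal sketch of the version that works for me (the learning_rate value and the dummy loss are just placeholders for illustration):

import torch

learning_rate = 0.01  # placeholder value
w = torch.tensor(0, dtype=torch.float32, requires_grad=True)
optimiser = torch.optim.SGD([w], lr=learning_rate)  # wrapping w in a list satisfies the iterable requirement

# one optimisation step on a dummy loss to confirm the setup works
loss = (w - 1.0) ** 2
loss.backward()
optimiser.step()
print(w)  # w has moved from 0 toward 1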