Why does the PyTorch optimiser take an iterable of parameters instead of a single high-dimensional tensor?

I am new to PyTorch and still learning it. While trying to use a PyTorch optimiser, I noticed that the optimiser only accepts an iterable of parameters.

I am wondering what the reason is behind designing the PyTorch optimiser this way. Why not just accept a single tensor with one added dimension?


Here is my weight:

import torch

learning_rate = 0.01  # illustrative value; not shown in the original snippet
w = torch.tensor(0, dtype=torch.float32, requires_grad=True)
optimiser = torch.optim.SGD(w, lr=learning_rate)  # passing a bare tensor, not an iterable

It raises the error below:
TypeError: params argument given to the optimizer should be an iterable of Tensors or dicts, but got torch.FloatTensor

When I pass w as a list into the optimiser, everything works fine.
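
For example (a minimal sketch with an illustrative learning rate), wrapping the tensor in a list is enough:

import torch

learning_rate = 0.01  # illustrative value
w = torch.tensor(0.0, requires_grad=True)

# Wrapping the tensor in a list satisfies the "iterable of Tensors" requirement
optimiser = torch.optim.SGD([w], lr=learning_rate)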

My guess would be that the iterable interface was chosen so it is compatible with the return value of model.parameters() in the original design.
I don't know if there are any other technical reasons.
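
For example, a model usually has several parameter tensors of different shapes, so they could not be stacked into one higher-dimensional tensor anyway; model.parameters() returns them as a generator, which the optimiser can iterate over directly. A minimal sketch:

import torch
import torch.nn as nn

# A model's parameters are several tensors of different shapes,
# so they cannot be combined into a single tensor.
model = nn.Linear(3, 1)  # weight has shape (1, 3), bias has shape (1,)

# model.parameters() yields each parameter tensor; SGD iterates over it
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)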
