Fixed learning rate and momentum of SGD in PyTorch?


How do I fix the learning rate and momentum of torch.optim.SGD in PyTorch? Does the momentum in torch.optim.SGD increase over time?


Hey, please refer to the official docs here.
The learning rate stays fixed unless you attach a scheduler, and momentum is an optional argument (default 0) that likewise stays at whatever value you pass, as stated in the docs. Neither value changes on its own during training.
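To illustrate, here is a minimal sketch (using a toy linear model purely for demonstration): both `lr` and `momentum` are set once at construction, live in `optimizer.param_groups`, and remain unchanged across training steps unless something explicitly mutates them.

```python
import torch

# Tiny model purely for demonstration
model = torch.nn.Linear(4, 1)

# lr and momentum are fixed at construction time
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Both values are stored in optimizer.param_groups
print(optimizer.param_groups[0]["lr"])        # 0.01
print(optimizer.param_groups[0]["momentum"])  # 0.9

# A few training steps: with no scheduler attached,
# neither value changes
for _ in range(3):
    loss = model(torch.randn(8, 4)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(optimizer.param_groups[0]["lr"])        # still 0.01
print(optimizer.param_groups[0]["momentum"])  # still 0.9
```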

So momentum is also fixed? How do you adjust momentum then?
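Yes, momentum stays fixed by default. If you want to change it mid-training, one option is to mutate `optimizer.param_groups` directly; some schedulers (e.g. torch.optim.lr_scheduler.OneCycleLR with `cycle_momentum=True`) also adjust momentum for you. A minimal sketch of the manual approach:

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# Manually raise momentum, e.g. partway through training.
# Iterating over all param_groups handles optimizers with
# multiple parameter groups as well.
for group in optimizer.param_groups:
    group["momentum"] = 0.9

print(optimizer.param_groups[0]["momentum"])  # 0.9
```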