https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/neural_networks_tutorial.ipynb

Hi, I am trying to understand neural networks with PyTorch.
I have doubts about the gradient calculations…

import torch.optim as optim

# create your optimizer

optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update

From the above code, I understand that loss.backward() computes the gradients.
I am not sure how this information is shared with the optimizer so it can update the parameters.

Can anyone explain this…

Thanks in advance !

When you create an instance of the optimizer, you pass the parameters to it:

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

The optimizer holds references to these parameters (not copies). When you call optimizer.step(), it reads each parameter's .grad attribute, which loss.backward() has filled in, and updates the parameter in place. No gradients are passed to the optimizer explicitly; the link is the shared parameter objects themselves.
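
To make that concrete, here is a minimal sketch with a single made-up parameter w (just for illustration, not your model) showing how .backward() and optimizer.step() interact:

import torch

# A tiny "model" with one learnable parameter.
w = torch.nn.Parameter(torch.tensor([1.0]))

# The optimizer stores a reference to w, not a copy.
optimizer = torch.optim.SGD([w], lr=0.1)

# Forward + backward: backward() writes the gradient into w.grad.
loss = (w * 3.0).sum()
loss.backward()
print(w.grad)        # tensor([3.])

# step() reads w.grad and updates w in place.
# For plain SGD this amounts to w <- w - lr * w.grad, so w becomes 1.0 - 0.1 * 3.0 = 0.7.
optimizer.step()
print(w)

# zero_grad() clears the stored gradients so they don't accumulate
# across iterations.
optimizer.zero_grad()

Since w is the same object inside the optimizer and inside your model, backward() and step() communicate purely through that shared parameter and its .grad attribute.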
