Should I use optimizer.step() or model.step() to train my model?

In PyTorch, to update the model, should I call optimizer.step() or model.step()? The same question applies to the zero_grad() method.

Here is an example snippet:

import torch
import torch.nn as nn
import torch.optim as optim

class SomeNeuralNet(nn.Module):
    def __init__(self, hs, es, dropout):
        super(SomeNeuralNet, self).__init__()
        # Some initialization here
        pass

    def forward(self, x):
        # forward propagation here
        pass

model = SomeNeuralNet(es, hs, dp)
optimizer = optim.Adam(model.parameters())
loss_function = nn.NLLLoss()
for epoch in range(N):
    for x in data:
        # Which one should I call? optimizer.zero_grad() or model.zero_grad() or both?
        model.zero_grad()
        optimizer.zero_grad()
        logp = model(x)
        loss = loss_function(logp, gold_outs)
        loss.backward()
        # Which one should I call? optimizer.step() or model.step() or both?
        optimizer.step()
        model.step()

nn.Module doesn’t have a step method, so you should call optimizer.step().
The model itself doesn’t know anything about the optimization of its parameters.
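
For reference, here is a minimal sketch of the usual order of a training step (zero the gradients, forward pass, loss, backward, optimizer.step()). The toy model, dummy data, and names used here are just placeholders for illustration:

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 3)                   # placeholder model
optimizer = optim.Adam(model.parameters())
loss_function = nn.NLLLoss()

x = torch.randn(4, 10)                     # dummy batch of 4 samples
target = torch.tensor([0, 1, 2, 0])        # dummy class labels

for epoch in range(2):
    optimizer.zero_grad()                             # clear old gradients
    logp = torch.log_softmax(model(x), dim=1)         # forward pass (NLLLoss expects log-probabilities)
    loss = loss_function(logp, target)
    loss.backward()                                   # compute gradients
    optimizer.step()                                  # update the parameters; there is no model.step()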

For zero_grad() it depends on your use case.
If you pass all parameters to the optimizer, both calls are equivalent and will clear the gradients of all parameters.
If you pass only some of the parameters to the optimizer, optimizer.zero_grad() will clear only the gradients of those parameters, while model.zero_grad() will clear the gradients of all of the model's parameters, as in the sketch below.
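
Here is a minimal sketch of that difference, using a hypothetical two-layer model where the optimizer only receives the second layer's parameters:

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))

# The optimizer only receives the parameters of the second layer.
optimizer = optim.Adam(model[1].parameters())

out = model(torch.randn(3, 4)).sum()
out.backward()                # now every parameter in the model has a gradient

optimizer.zero_grad()         # clears only the second layer's gradients
print(model[0].weight.grad)   # still holds the gradient from backward()
print(model[1].weight.grad)   # cleared (zeros or None depending on the PyTorch version)

model.zero_grad()             # clears the gradients of every parameter in the model
print(model[0].weight.grad)   # now cleared as well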

Thank you for the explanation.

Does the optimizer step() method change/update the optimizer? In other words, should I be saving/loading the optimizer as well as the model during the training process?

Yes, it might change some internal state, such as running estimates of the gradients if the optimizer uses them, so you should save optimizer.state_dict() in addition to model.state_dict() to restore the training later.
If the optimizer doesn't use such estimates, saving its state_dict() won't change anything, but I would recommend saving it anyway.
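
A minimal sketch of saving and restoring both state dicts to resume training; the file name and the placeholder model/optimizer are just assumptions for illustration:

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 3)                   # placeholder model
optimizer = optim.Adam(model.parameters())

# Save a checkpoint during training.
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pt')

# Later: rebuild the objects and restore both states to resume training.
model = nn.Linear(10, 3)
optimizer = optim.Adam(model.parameters())
checkpoint = torch.load('checkpoint.pt')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])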