Optimizer step function with torch.no_grad() decorator

I am working on writing my own optimizer. While going through the code for the default optimizers such as RMSprop, I found a torch.no_grad() decorator just before the step function. Is it necessary to use this decorator on the step function when I am writing my own optimizer's step function? Can someone please explain why it is applied to the step function?

If you didn't use no_grad(), the update operation would be tracked by autograd, and you almost certainly don't want the weight update itself recorded in the computational graph. Wrapping step() in no_grad() also allows the in-place update of leaf parameters, which autograd would otherwise refuse with a RuntimeError.
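As a minimal sketch of what that looks like in practice (a toy SGD-style optimizer, not the actual RMSprop source; the class name and learning rate are made up for illustration):

```python
import torch

class MySGD(torch.optim.Optimizer):
    def __init__(self, params, lr=0.01):
        super().__init__(params, dict(lr=lr))

    # The decorator disables gradient tracking for everything inside
    # step(), so the in-place parameter updates below are not recorded
    # in the computational graph.
    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    # In-place update: p <- p - lr * grad. Without
                    # no_grad(), this in-place op on a leaf tensor that
                    # requires grad would raise a RuntimeError.
                    p.add_(p.grad, alpha=-group["lr"])

# Usage: one gradient step on a single parameter.
w = torch.nn.Parameter(torch.tensor([1.0]))
opt = MySGD([w], lr=0.1)
loss = (w ** 2).sum()   # d(loss)/dw = 2w = 2.0 at w = 1.0
loss.backward()
opt.step()              # w becomes 1.0 - 0.1 * 2.0 = 0.8
```

After the step, w is still a leaf parameter with no grad_fn, confirming the update left no trace in the autograd graph.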

Understood, thanks for the reply.