What is the point of torch.no_grad in this weird place?

The following is from the official tutorial. My question is: what’s the point of no_grad here? My understanding is that it is only useful if you want to save costs when running the forward pass.

with torch.no_grad():
    for param in model.parameters():
        param -= learning_rate * param.grad

The no_grad guard disables gradient calculation, so Autograd won’t track any operations performed inside the block.
While it’s most often used for inference, it can also be used when you want to manipulate parameters in-place without going through the .data attribute. Here it is actually required: the parameters are leaf tensors with requires_grad=True, and an in-place operation on such a tensor outside of no_grad raises a RuntimeError ("a leaf Variable that requires grad is being used in an in-place operation").
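
For a concrete illustration, here is a small self-contained sketch (the tiny nn.Linear model, the input shape, and the learning rate are arbitrary placeholders, not from the tutorial) showing the update working inside the guard, where the same in-place subtraction outside of it would fail:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)      # placeholder model
x = torch.randn(8, 4)        # dummy input
model(x).sum().backward()    # populate param.grad with some gradients

learning_rate = 1e-2

# Running this loop without the no_grad guard would raise:
# RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
with torch.no_grad():
    for param in model.parameters():
        param -= learning_rate * param.grad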