It's the same whether you use SGD, Adam, RMSprop, etc. Typically I call optimizer.zero_grad() at the start of each training iteration:
optimizer.zero_grad()
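For context, here is a minimal training-loop sketch showing where that call goes. The toy linear model, data, and SGD are just illustrative assumptions; swapping in Adam or RMSprop changes nothing about the zero_grad() placement.

```python
import torch

# Toy model and optimizer (assumptions for illustration);
# zero_grad() works identically for Adam, RMSprop, etc.
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

x = torch.randn(8, 2)
y = torch.randn(8, 1)

for _ in range(3):
    optimizer.zero_grad()        # clear gradients left over from the previous step
    loss = loss_fn(model(x), y)
    loss.backward()              # gradients accumulate into each parameter's .grad
    optimizer.step()             # apply the update
```

Without the zero_grad() call, backward() would keep accumulating gradients across iterations, which is almost never what you want in a standard loop.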