Query: Looping over batches per epoch

Hello all,
I have a query regarding the placement of optimizer.zero_grad() and optimizer.step():

for idx in range(epoch):
    for kdx in range(batch):
        y_pred = model(X)
        loss = loss_fn(y_pred, y_target)
        optimizer.zero_grad()  # clear old gradients
        loss.backward()        # compute new gradients
        optimizer.step()       # update the parameters

Do we need to call them once for every epoch or for every batch iteration?

It depends on your use case.
The usual approach is to zero out the old gradients, compute the new gradients via loss.backward(), and update the parameters via optimizer.step() for each batch (i.e., once per iteration), as in your posted loop.

However, you could also simulate a larger batch size by accumulating the gradients over multiple batches and calling optimizer.step() (followed by optimizer.zero_grad()) only after a specific number of iterations, as in the sketch below.
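For reference, here is a minimal sketch of that gradient accumulation pattern; the model, data, and accum_steps value are made-up placeholders for illustration, not part of the original post:

import torch
import torch.nn as nn

# Toy setup for illustration only; substitute your own model, data, and optimizer.
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps = 4  # number of batches to accumulate before each parameter update

optimizer.zero_grad()
for kdx in range(100):
    X = torch.randn(8, 10)                # dummy input batch
    y_target = torch.randint(0, 2, (8,))  # dummy targets
    y_pred = model(X)
    # Scale the loss so the accumulated gradient approximates the
    # average over the larger effective batch.
    loss = loss_fn(y_pred, y_target) / accum_steps
    loss.backward()  # gradients accumulate in each parameter's .grad
    if (kdx + 1) % accum_steps == 0:
        optimizer.step()       # update using the accumulated gradients
        optimizer.zero_grad()  # reset for the next accumulation window

This gives an effective batch size of 8 * accum_steps = 32 while only ever holding a batch of 8 in memory at once.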

Thanks for the reply.