Epochs - Iterations - Batch Size do not match

I have set the number of epochs to 300 and the batch size to 128, and therefore the number of iterations should be:

iterations = epochs * batchsize = 38400

However, I can see that after a single epoch the iteration counter has increased by more than 128.

def train(dataset, name, model, optimizer, criterion, device, trainloader, validloader,
          epochs, iters, save, paths, save_frequency=1, test=True, validate=True):
    
    j = 0            # Iterations counter
    model.train()
    for epoch in range(1, epochs+1):
               
        # Training
        for i, (images, labels) in enumerate(trainloader):
            
            j += 1
            
            # Forward pass
            # Calculate loss
            # Backward pass
            # Calculate accuracy
        
        stats = [epoch, epochs, j, iters, lss, acc]
        print('\n Train: Epoch: [{}/{}] Iter: [{}/{}] Loss: {} Acc: {}%'.format(*stats))   
        
        # Validation
        if validate:
           
            for k, (images, labels) in enumerate(validloader):
            
                # Forward pass
                # Calculate loss (no backward pass during validation)
                # Calculate accuracy
                
            # Save model and delete previous if it is the best

        stats = [epoch, epochs, j, iters, lss, acc]
        print('\n Valid: Epoch: [{}/{}] Iter: [{}/{}] Loss: {} Acc: {}%'.format(*stats))

However, this is printing:

Train: Epoch: [1/300] Iter: [352/38400] Loss: 1.564 Acc: 48.61%
Valid: Epoch: [1/300] Iter: [352/38400] Loss: 1.699 Acc: 34.28%

Train: Epoch: [2/300] Iter: [704/38400] Loss: 1.398 Acc: 44.44%
Valid: Epoch: [2/300] Iter: [704/38400] Loss: 1.973 Acc: 47.44%

Train: Epoch: [3/300] Iter: [1056/38400] Loss: 1.311 Acc: 56.94%
Valid: Epoch: [3/300] Iter: [1056/38400] Loss: 1.283 Acc: 58.06%

What am I missing?
Thanks in advance

The number of iterations per epoch is calculated as number_of_samples / batch_size.
So if you have 1280 samples in your Dataset and set batch_size=128, your DataLoader will return 10 batches of 128 samples each. Therefore, your iteration counter will increase by 10 per epoch.
As a small side note: if the division leaves a remainder, the last batch will be smaller, unless you set drop_last=True in your DataLoader.
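
For a quick check, here is a minimal sketch (the tensor shapes are made up, just for illustration) showing how len(DataLoader) reports the number of batches, i.e. iterations, per epoch:

import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical dataset of 1280 samples (shapes chosen arbitrarily)
data = TensorDataset(torch.randn(1280, 3, 32, 32), torch.randint(0, 10, (1280,)))

loader = DataLoader(data, batch_size=128, drop_last=False)
print(len(loader))    # 10 -> your counter j increases by 10 per epoch

# With 1300 samples the division leaves a remainder of 20:
data = TensorDataset(torch.randn(1300, 3, 32, 32), torch.randint(0, 10, (1300,)))
print(len(DataLoader(data, batch_size=128, drop_last=False)))  # 11 (last batch has 20 samples)
print(len(DataLoader(data, batch_size=128, drop_last=True)))   # 10 (remainder is dropped)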

Thank you,

I think the confusion comes from different terminology.
Specifically, I am following the ResNet paper on CIFAR-10, where they fix the number of iterations to 64000 and the batch_size to 128.

However, I think they make a distinction between iterations and batches.

Batches will be the number of batches the dataloader will yield per epoch:
Batches = Samples / Batch Size = 45000 / 128 ≈ 352

Then, the total iterations will be the batches per epoch times all the epochs:
Iterations = Epochs * Batches

Since they have fixed the number of iterations to 64000, the computation is:
Epochs = Iterations / Batches = 64000 / 352 ≈ 182
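
As a quick sanity check of that arithmetic (assuming drop_last=False, so the partial last batch still counts as an iteration):

import math

samples, batch_size, target_iterations = 45000, 128, 64000

batches_per_epoch = math.ceil(samples / batch_size)      # 352 (the last batch has only 72 samples)
epochs_needed = target_iterations / batches_per_epoch    # ~181.8, i.e. roughly 182 epochs

print(batches_per_epoch, epochs_needed)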

Do you agree or am I missing something?

Thanks!