Question about batch in enumerate(dataloader)

Hello. I am training a multiclass classification model in PyTorch on my own custom dataset. The dataset has 1000 samples, of which I use 750 for training. The model runs successfully, but there is a problem with the progress numbers it prints. I think the bug is around for batch, (data, label) in enumerate(dataloader): when I print batch, the printed value is always 0.

Here is part of the code:


def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (data, label) in enumerate(dataloader):
        data = data.to(device)
        label = label.to(device)

        # Compute prediction and loss
        output = model(data)
        label = label.squeeze(1)
        loss = loss_fn(output, label)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            # print(batch)
            loss, current = loss.item(), batch * len(data)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")

    schedulerR.step(loss)

Here is some output:


dataset: 1000
Epoch 1
loss: 4.898516  [    0/  750]
Test Error: 
 Accuracy: 21.2%, Avg loss: 2.147434
Epoch 2
-------------------------------
loss: 2.177498  [    0/  750]
Test Error: 
 Accuracy: 18.8%, Avg loss: 2.567600
Epoch 3
-------------------------------
loss: 2.530193  [    0/  750]
Test Error: 
 Accuracy: 19.2%, Avg loss: 2.736725

I think that under normal circumstances the 0 in the brackets should keep increasing. Could you tell me how I can fix it?

Thanks for reading.

Based on the output, it seems that you are printing the loss only once per epoch, i.e. only when batch == 0.
I don't know how large your batch_size is, but I assume the loader yields fewer than 100 batches per epoch, so batch % 100 == 0 is never true a second time.
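
For example, with your 750 training samples and a batch_size of 64 (an assumed value, since it isn't shown in your post), the loader would only yield 12 batches per epoch, so the print condition fires exactly once:

import math

num_samples = 750   # training set size from the post
batch_size = 64     # assumed value; the actual batch_size isn't shown

# Number of batches per epoch: ceil(750 / 64) = 12, far below 100
num_batches = math.ceil(num_samples / batch_size)
print(num_batches)  # 12

# batch runs from 0 to 11, so batch % 100 == 0 only holds for batch 0
for batch in range(num_batches):
    if batch % 100 == 0:
        print(f"print fires at batch {batch}")  # only batch 0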

Thank you for the reply. I changed the condition to batch > 0, and now it works well. Thank you so much.
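
For reference, a sketch of the same idea using a smaller print interval instead of batch > 0 (print_every = 10 is just a value I picked, and device and schedulerR come from the rest of my script):

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Any interval smaller than len(dataloader) prints more than once per epoch
    print_every = 10
    for batch, (data, label) in enumerate(dataloader):
        data = data.to(device)
        label = label.to(device)
        output = model(data)
        loss = loss_fn(output, label.squeeze(1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch % print_every == 0:
            # Call loss.item() inline so the loss tensor is not rebound to a float
            current = batch * len(data)
            print(f"loss: {loss.item():>7f}  [{current:>5d}/{size:>5d}]")
    schedulerR.step(loss.item())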