I don’t understand how to calculate the running_loss value when training a model.

In the Training a classifier tutorial, running_loss is incremented by loss.item() and reset to 0.0 every 2000 mini-batches, as follows:
running_loss += loss.item()
if i % 2000 == 1999:    # print every 2000 mini-batches
    print('[%d, %5d] loss: %.3f' %
          (epoch + 1, i + 1, running_loss / 2000))
    running_loss = 0.0
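If I understand correctly, loss.item() here is the mean loss over one mini-batch (nn.CrossEntropyLoss defaults to reduction='mean'), so running_loss / 2000 would be the average per-batch loss over the last 2000 mini-batches. Here is a minimal sketch of that assumption; the batch size of 4 and the 10 classes are made up for illustration:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()   # default reduction='mean'
outputs = torch.randn(4, 10)        # hypothetical mini-batch: 4 samples, 10 classes
labels = torch.randint(0, 10, (4,))
loss = criterion(outputs, labels)
print(loss.item())                  # a single number: the mean loss per sample in this batch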
But in the Transfer learning tutorial, running_loss is incremented by loss.item() * inputs.size(0) and is not reset to 0.0 inside the loop:
...
    # statistics (inside the per-phase dataloader loop)
    running_loss += loss.item() * inputs.size(0)
    running_corrects += torch.sum(preds == labels.data)

if phase == 'train':
    scheduler.step()

epoch_loss = running_loss / dataset_sizes[phase]
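My guess is that multiplying by inputs.size(0) turns the per-batch mean back into a per-batch sum, so that epoch_loss is an exact per-sample average even when the last batch is smaller than the rest. A small self-contained sketch of that reasoning, with invented loss values and batch sizes:

# Hypothetical per-batch mean losses for batches of size 4, 4 and 2 (10 samples total)
batch_means = [0.9, 0.7, 0.5]
batch_sizes = [4, 4, 2]

running_loss = sum(mean * n for mean, n in zip(batch_means, batch_sizes))  # total loss over all samples
epoch_loss = running_loss / sum(batch_sizes)
print(epoch_loss)  # ~0.74, the exact per-sample average

# Naively averaging the batch means gives (0.9 + 0.7 + 0.5) / 3 = 0.7,
# which over-weights the smaller final batch.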
Is this the right way to think about it? What exactly is the difference between the two approaches?