How to calculate the running_loss value

I don’t understand how to calculate the running_loss value when training a model.

In the Training a Classifier tutorial, loss.item() is added to running_loss during training, and running_loss is reset to 0.0 every 2000 mini-batches, as follows:

running_loss += loss.item()
if i % 2000 == 1999:    # print every 2000 mini-batches
    print('[%d, %5d] loss: %.3f' %
          (epoch + 1, i + 1, running_loss / 2000))
    running_loss = 0.0

But in the Transfer Learning tutorial, loss.item() * inputs.size(0) is added to running_loss, and running_loss is not reset to 0.0:

    ...
    # statistics
    running_loss += loss.item() * inputs.size(0)
    running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
    scheduler.step()

epoch_loss = running_loss / dataset_sizes[phase]

What is the difference?

It depends on how you would like to report the running loss value.

In the first example, the mean loss of the current batch (loss.item(), assuming the criterion uses the default reduction='mean') is added to running_loss; the accumulated value is printed every 2000 iterations and then reset.
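For reference, here is a minimal runnable sketch of this first pattern. The model, data, and hyperparameters are all made up for illustration; the point is only the running_loss bookkeeping, and it assumes the criterion uses the default reduction='mean':

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                  # hypothetical toy model
criterion = nn.CrossEntropyLoss()         # reduction='mean' by default
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

running_loss = 0.0
for i in range(4000):                     # stand-in for the mini-batch loop
    inputs = torch.randn(8, 10)           # random batch of 8 samples
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()

    running_loss += loss.item()           # accumulate the batch-mean loss
    if i % 2000 == 1999:
        # average of the per-batch means over the last 2000 batches
        print('[%5d] loss: %.3f' % (i + 1, running_loss / 2000))
        running_loss = 0.0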

The second example computes the loss for the complete epoch and is a bit more accurate.
Depending on the batch size and the length of your dataset, the last batch might be smaller than the specified batch size, in which case a plain average of the batch means would overweight it. Multiplying each batch's mean loss by the current batch size (loss.item() * inputs.size(0)) and dividing the accumulated sum by the total length of the dataset (running_loss / dataset_sizes[phase]) yields the exact per-sample average loss over the epoch, regardless of the batch sizes (see the sketch below).
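To make the difference concrete, here is a toy calculation with made-up per-batch losses. With 10 samples and a batch size of 4, the batches have 4, 4, and 2 samples, so a plain average of the batch means overweights the small last batch, while the size-weighted sum divided by the dataset length recovers the true per-sample mean:

batch_sizes = [4, 4, 2]                  # 10 samples, batch_size=4
batch_means = [1.0, 2.0, 5.0]            # hypothetical per-batch mean losses

# plain average of batch means: (1 + 2 + 5) / 3 = 2.667
avg_of_means = sum(batch_means) / len(batch_means)

# size-weighted sum, as in the transfer learning tutorial:
# (1*4 + 2*4 + 5*2) / 10 = 22 / 10 = 2.2, the exact per-sample mean
running_loss = sum(m * n for m, n in zip(batch_means, batch_sizes))
epoch_loss = running_loss / sum(batch_sizes)

print(avg_of_means, epoch_loss)          # 2.667 vs. 2.2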

@ptrblck, thank you for your answer.

Why 2000 iterations in particular?