# How to calculate epoch loss

Hi everyone, if I want to print the epoch loss (i.e. the loss after every 300 epochs), how should I modify this code?
And what is the difference between the running loss and the epoch loss?

```
from torch.nn import MSELoss
from torch.optim import Adam

def train_model(train_dl, model):
    criterion = MSELoss()                                               # loss function
    optimizer = Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))  # optimizer to be used
    for epoch in range(300):
        running_loss = 0.0
        epoch_loss = 0.0
        for i, (inputs, targets) in enumerate(train_dl):
            optimizer.zero_grad()  # clear gradients from the previous step
            pred = model(inputs)
            loss = criterion(pred, targets)
            loss.backward()
            optimizer.step()
            # print("loss.item", loss.item())
            running_loss += loss.item()
            epoch_loss += pred.shape[0] * loss.item()
    print('Finished Training')
```
1. To print every 300 epochs, check the epoch value:

```
from torch.nn import MSELoss
from torch.optim import Adam

def train_model(train_dl, model):
    criterion = MSELoss()                                               # loss function
    optimizer = Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))  # optimizer to be used
    for epoch in range(300):
        running_loss = 0.0
        epoch_loss = 0.0
        for i, (inputs, targets) in enumerate(train_dl):
            optimizer.zero_grad()  # clear gradients from the previous step
            pred = model(inputs)
            loss = criterion(pred, targets)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            epoch_loss += pred.shape[0] * loss.item()
        if epoch % 300 == 0:  # print only every 300th epoch
            print("loss.item", loss.item())
    print('Finished Training')
```
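To make the accumulation concrete, here is a minimal sketch of the same bookkeeping stripped of PyTorch: it assumes the per-batch losses are already available as plain floats (what `loss.item()` returns), and the function name `epoch_averages` and the `print_every` parameter are placeholders introduced here, not part of your code.

```python
# Sketch (assumption: batch losses are plain floats, as from loss.item()):
# accumulate per-batch mean losses and print the epoch average every
# `print_every` epochs.

def epoch_averages(per_epoch_batch_losses, print_every=300):
    """Return the average loss of each epoch; a stand-in for the
    running_loss / num_batches computation inside train_model."""
    averages = []
    for epoch, batch_losses in enumerate(per_epoch_batch_losses):
        running_loss = 0.0
        for loss_item in batch_losses:          # what loss.item() yields per batch
            running_loss += loss_item
        avg = running_loss / len(batch_losses)  # mean over this epoch's batches
        averages.append(avg)
        if epoch % print_every == 0:            # print every `print_every` epochs
            print(f"epoch {epoch}: avg loss {avg:.4f}")
    return averages
```

For example, `epoch_averages([[1.0, 3.0], [0.5, 1.5]], print_every=1)` returns `[2.0, 1.0]`.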

2. On the difference between running and epoch loss, please refer to this link. Although they refer to the running_loss (the epoch loss in your case), the concept should make things clear to you.

In your example, running_loss is the sum of per-mini-batch losses, whereas epoch_loss undoes the mean reduction by weighting each batch loss by its batch size …
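The distinction only matters when batch sizes are unequal (e.g. a smaller final batch). A small sketch, assuming `MSELoss` with the default `reduction='mean'` so each `loss.item()` is a batch mean; the helper `compare_losses` is hypothetical, introduced just for this illustration:

```python
# Sketch (assumption: each loss.item() is the mean loss over its batch):
# a plain average of batch means differs from the true per-sample mean,
# which epoch_loss recovers by weighting each batch by its size.

def compare_losses(batches):
    """batches: list of (batch_size, batch_mean_loss) pairs.
    Returns (average of batch means, per-sample epoch average)."""
    running_loss = sum(mean for _, mean in batches)    # sum of batch-mean losses
    epoch_loss = sum(n * mean for n, mean in batches)  # undo the mean reduction
    n_samples = sum(n for n, _ in batches)
    n_batches = len(batches)
    return running_loss / n_batches, epoch_loss / n_samples

# Two batches: 8 samples with mean loss 1.0, 2 samples with mean loss 3.0.
compare_losses([(8, 1.0), (2, 3.0)])  # → (2.0, 1.4)
```

Here the naive average of batch means (2.0) over-weights the small batch, while dividing epoch_loss by the dataset size gives the true per-sample mean (1.4).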
