Best practices for measuring loss

The PyTorch tutorials I have been following typically just accumulate running_loss += loss.item() and don't take a validation set into account. At the moment I track the training error with running_loss += loss.item(), and every n epochs I compute the MSE on the validation set. I calculate the validation error like this:

   y_val_predicted = net(X_va)
   temp1 = mean_squared_error(y_val_T.detach().numpy(), y_val_predicted.detach().numpy())

This seems inefficient, so I was wondering what other people do in this specific case, and also, more broadly, whether there are any tips on best practices for measuring loss as the network is being trained.
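
For context, this is roughly the kind of loop I mean. Everything below (the net, the random data, X_val / y_val, the hyperparameters) is just a placeholder so the sketch is self-contained, not my actual setup:

    import torch
    import torch.nn as nn

    # Placeholder model, loss, optimiser and data so the loop runs on its own
    net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    X_train, y_train = torch.randn(512, 10), torch.randn(512, 1)
    X_val, y_val = torch.randn(128, 10), torch.randn(128, 1)
    train_loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(X_train, y_train), batch_size=32, shuffle=True
    )

    n_epochs, val_every = 50, 5
    for epoch in range(n_epochs):
        net.train()
        running_loss = 0.0
        for xb, yb in train_loader:
            optimizer.zero_grad()
            loss = criterion(net(xb), yb)
            loss.backward()
            optimizer.step()
            running_loss += loss.item() * xb.size(0)          # sum of per-sample losses
        train_mse = running_loss / len(train_loader.dataset)  # epoch-average training MSE

        if (epoch + 1) % val_every == 0:
            net.eval()             # switch off dropout / batchnorm updates
            with torch.no_grad():  # no autograd graph needed for evaluation
                # same MSE as sklearn's mean_squared_error, without leaving torch
                val_mse = criterion(net(X_val), y_val).item()
            print(f"epoch {epoch + 1}: train MSE {train_mse:.4f}, val MSE {val_mse:.4f}")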