Why do training and testing go in the same loop?

Hi, I am new to PyTorch. I was wondering why we should use the same epoch loop for training and testing:

for epoch in range(1, args.epochs + 1):
    model.train_(train_loader, epoch)
    model.test_(test_loader, epoch)
How is that different from using two separate loops? Thank you.

for epoch in range(1, args.epochs + 1):
    model.train_(train_loader, epoch)

for epoch in range(1, args.epochs + 1):
    model.test_(test_loader, epoch)

Usually you would like to validate your model after each training epoch to get a signal about its ability to generalize, i.e. how high the validation accuracy gets (or how low the validation loss gets). Using these validation metrics you could apply e.g. early stopping in order to stop the training once your model starts to overfit.
If you instead train for all epochs first and only validate (or test) afterwards, you will just get the final validation metrics. Running the validation (or test) for several epochs sequentially doesn't make sense either: the weights are not being updated, so the metrics won't change and you'll end up with the same values every time.
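
For illustration, here is a minimal sketch of such an early-stopping loop. The helpers train_one_epoch and evaluate, the patience value, and num_epochs are placeholders of mine, not part of the snippet above:

import torch

best_acc = 0.0
epochs_without_improvement = 0
patience = 5      # placeholder: give up after 5 epochs without improvement
num_epochs = 50   # placeholder standing in for args.epochs

for epoch in range(1, num_epochs + 1):
    train_one_epoch(model, train_loader)    # hypothetical helper: updates the weights
    val_acc = evaluate(model, val_loader)   # hypothetical helper: no weight updates

    if val_acc > best_acc:
        best_acc = val_acc
        epochs_without_improvement = 0
        torch.save(model.state_dict(), "best_model.pt")  # checkpoint the best model so far
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Stopping early at epoch {epoch}")
            break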


Thank you for your reply. But why, during validation, do we focus only on the best accuracy score to decide when to save the model? What about the loss? Also, some implementations use
for data, target in test_loader:

instead of

for batch_idx, (data, target) in enumerate(test_loader):

Why?

Thank you

In the validation phase we mostly care about the general performance of the model (e.g. accuracy) rather than the loss. During the training phase, tracking the training loss is very useful to see how the model actually processes the data, but most importantly, the loss is strictly tied to backpropagation and updating the weights, so computing the loss is obligatory for training but not for validation.
That said, if you want to understand better how the model behaves on the validation data, it is also useful to keep track of the validation loss.
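
If you do want to track both, a validation pass along these lines (just a sketch, assuming a standard classification setup with a mean-reduced criterion such as nn.CrossEntropyLoss) computes loss and accuracy without ever calling backward():

import torch

def validate(model, test_loader, criterion, device="cpu"):
    model.eval()                      # switch off dropout, use running batchnorm stats
    total_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():             # no gradients needed: we never backprop here
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            total_loss += criterion(output, target).item() * target.size(0)
            correct += (output.argmax(dim=1) == target).sum().item()
            total += target.size(0)
    return total_loss / total, correct / total   # average loss, accuracy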

In the code you provided, the only difference is that in

for data, target in test_loader:

you simply load the batch (data and target labels) without keeping track of the batch index, while in

for batch_idx, (data, target) in enumerate(test_loader):

you load the batch together with its index, which is sometimes useful for tracking progress and the like. You can check why the author kept track of batch indices simply by looking at where they are used later in the code.
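
For example, a common pattern (just a sketch, with an arbitrary logging interval of my choosing) is to use the index to report progress every few batches:

import torch

model.eval()
with torch.no_grad():
    for batch_idx, (data, target) in enumerate(test_loader):
        output = model(data)
        # batch_idx comes for free from enumerate and lets us report progress
        if batch_idx % 10 == 0:
            print(f"processed batch {batch_idx}/{len(test_loader)}")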

Hope I helped


Thank you, you saved the day!

Thank you, I appreciate it.