It is common to split your data into a training set and a test set (where the test set is much smaller than the training set). This way you train the network on the training set and let it learn from that, while you measure its performance on the test set, which the network is never allowed to learn from.
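For example, here is a minimal sketch of such a split using PyTorch's torch.utils.data.random_split; the dataset here is a made-up TensorDataset purely for illustration:

```python
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader

# Made-up dataset: 1000 samples, 10 features each, binary labels.
features = torch.randn(1000, 10)
labels = torch.randint(0, 2, (1000,))
dataset = TensorDataset(features, labels)

# Split roughly 80/20 into a training set and a (much smaller) test set.
train_size = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [train_size, len(dataset) - train_size])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)
```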
As for how often you should test…
Well, that's really up to you. You could test once after the whole training is finished, or test once after every epoch to see how the accuracy changes over time.
If your test set is not too large, I would recommend testing after every epoch.
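If you do test after every epoch, the overall loop could look something like this sketch; model, train_one_epoch, and evaluate are hypothetical names standing in for your own model and your own training/testing routines:

```python
num_epochs = 10  # assumed value for illustration

for epoch in range(num_epochs):
    train_one_epoch(model, train_loader)     # learn from the training set
    accuracy = evaluate(model, test_loader)  # test once per epoch
    print(f"epoch {epoch}: test accuracy = {accuracy:.3f}")
```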
You test your network by running your test set (usually the whole thing, if it is not too large) through it once. The very, very important difference from training is that you do not use backpropagation on the output you get from the test set! So no Tensor.backward() on the output of your test data.
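A plain test pass might look like the sketch below (model and test_loader are assumed to exist already); note that the loop only does forward passes, with no loss.backward() and no optimizer step:

```python
correct = 0
total = 0
for inputs, targets in test_loader:
    outputs = model(inputs)              # forward pass only
    predictions = outputs.argmax(dim=1)  # pick the most likely class
    correct += (predictions == targets).sum().item()
    total += targets.size(0)
print(f"test accuracy: {correct / total:.3f}")
```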
In fact, it is best practice to wrap the testing code in a with torch.no_grad(): block:

```python
with torch.no_grad():
    # testing code
```
This disables gradient calculation inside the block, which makes sure you cannot call Tensor.backward() on anything computed there, meaning you cannot learn from your test data; it can also reduce memory consumption.
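So the test pass from above would simply be wrapped like this:

```python
correct = 0
total = 0
with torch.no_grad():  # outputs computed here do not track gradients
    for inputs, targets in test_loader:
        outputs = model(inputs)
        predictions = outputs.argmax(dim=1)
        correct += (predictions == targets).sum().item()
        total += targets.size(0)
print(f"test accuracy: {correct / total:.3f}")
```

Trying to call Tensor.backward() on an output produced inside the block will raise an error, since that output does not require gradients.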