Hi
I wish to record the log likelihood for the validation set for a few iterations. Can anyone tell me how to go about doing it?
I am assuming you are using a loss function like NLLLoss to get the log likelihood.
criterion = nn.NLLLoss()
loss = criterion(output, target)
You can get the value of the loss using loss.item()
and store it in a list or array. For a nice example of how to keep track of your loss, check out this line from the imagenet example.
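Something like this should work (just a minimal sketch; model and val_loader are placeholders for your own network and validation DataLoader):

import torch
import torch.nn as nn

criterion = nn.NLLLoss()
val_losses = []  # one averaged loss per validation pass

def validate(model, val_loader):
    model.eval()
    total_loss, n_batches = 0.0, 0
    with torch.no_grad():
        for data, target in val_loader:
            output = model(data)  # expected to be log-probabilities (e.g. LogSoftmax output)
            loss = criterion(output, target)
            total_loss += loss.item()
            n_batches += 1
    return total_loss / n_batches

# inside your training loop, every few iterations or once per epoch:
# val_losses.append(validate(model, val_loader))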
Thanks a lot Viraat.
I wish to clarify my doubt once again. I actually want to record the likelihood of my validation set at the point where the error rate is minimum during training.
Using this likelihood as my reference, I want to train my model once again, this time using a new training set that combines the old train set and the validation set (new train set = old train set + validation set), and continue training until the new log likelihood of the validation set (the validation set remains the same as in the previous case) matches the previously recorded log likelihood of the validation set.
No problem Akhilesh.
Please let me know if what I understand is right.
I’m assuming you mean that you want to get the log likelihood on your validation set when the validation error is lowest during training.
You then want to re-train your model (not from scratch?) using a full dataset (train + validation) and train till new log likelihood of the validation set matches your previous minimum. You say that your validation set remains the same. Traditionally you would want your validation set not to include samples from your training set, but when you combine your train + validation set and use the same validation set, the model will be predicting for inputs it has already seen.
I’m curious as to why exactly you would want to do something like that.
Nevertheless, I think you can keep track of your validation_error in your training loop and update the log likelihood whenever you find a lower validation_error. Assume that the log likelihood at the lowest validation error is log_likelihood_target. Then you would re-train on the combined set till the log_likelihood of your validation set is equal to log_likelihood_target.
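As a rough sketch of both phases (train_one_epoch, evaluate, build_model, combined_loader and num_epochs are hypothetical placeholders for your own code; evaluate is assumed to return the error rate and the log likelihood on the validation set):

# Phase 1: normal training, remember the log likelihood at the lowest validation error.
best_val_error = float('inf')
log_likelihood_target = None
for epoch in range(num_epochs):
    train_one_epoch(model, train_loader)           # your usual training step
    val_error, val_log_likelihood = evaluate(model, val_loader)
    if val_error < best_val_error:
        best_val_error = val_error
        log_likelihood_target = val_log_likelihood

# Phase 2: re-train from scratch on train + validation data until the
# validation log likelihood reaches the recorded target.
model = build_model()                              # fresh model
while True:
    train_one_epoch(model, combined_loader)
    _, val_log_likelihood = evaluate(model, val_loader)
    if val_log_likelihood >= log_likelihood_target:  # higher log likelihood = better;
        break                                        # flip the comparison if you track NLL instead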
And yeah, I want to retrain the model from scratch with the 60000 training samples.