Hi,
I’m using an _evaluate_model method I wrote to log the training, dev, and test loss during each epoch.
The function is:
def _evaluate_model(some_datacut, name_of_datacut):
    curr_epoch_loss = 0
    with torch.no_grad():
        for sentence, tags in some_datacut:
            sentence_in = prepare_sequence(sentence, word_to_ix)
            targets = torch.LongTensor([tag_to_ix[t] for t in tags])
            loss = model.neg_log_likelihood(sentence_in, targets)
            curr_epoch_loss += loss
    print(name_of_datacut + " Loss ", curr_epoch_loss.item())
    return
I was seeing something odd when printing the values out.
The Epoch loss is computed on the training dataset during training.
The 3 evaluate calls on the training data are just to show how the loss values keep changing, even though I iterate through the same training data and the model does not change between the calls.
for epoch in range(300):
    _train_model(epoch)
    _evaluate_model(training_data, "TRAINING_1")
    _evaluate_model(training_data, "TRAINING_2")
    _evaluate_model(training_data, "TRAINING_3")
    _evaluate_model(dev_data, "DEV")
    _evaluate_model(testing_data, "TESTING")
I get this!
Epoch 1 Loss : 26.947420120239258
TRAINING_1 Loss 26.533092498779297
TRAINING_2 Loss 26.546777725219727
TRAINING_3 Loss 26.50927734375
DEV Loss 8.607915878295898
TESTING Loss 9.82412338256836
Epoch 2 Loss : 26.41505241394043
TRAINING_1 Loss 25.940399169921875
TRAINING_2 Loss 26.062667846679688
TRAINING_3 Loss 25.959278106689453
DEV Loss 8.684854507446289
TESTING Loss 9.67626667022705
Epoch 3 Loss : 26.273456573486328
TRAINING_1 Loss 25.535442352294922
TRAINING_2 Loss 25.73399543762207
TRAINING_3 Loss 25.672330856323242
DEV Loss 8.726755142211914
TESTING Loss 9.62713623046875
I was expecting the Epoch loss and the three passes over the training data to give exactly equal loss values! Help!
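To illustrate what I expected, here is a minimal, self-contained sketch with a toy nn.Sequential model (not my actual BiLSTM-CRF, and none of my helpers like prepare_sequence). Under torch.no_grad(), re-evaluating an unchanged model on the same data gives bit-identical values unless a stochastic layer such as nn.Dropout is still in train mode, in which case the "evaluation" loss varies from call to call:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model containing a stochastic layer (dropout).
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.Linear(4, 1))
x = torch.randn(8, 4)
target = torch.randn(8, 1)
loss_fn = nn.MSELoss()

def evaluate():
    # no_grad() only disables gradient tracking; it does NOT disable dropout.
    with torch.no_grad():
        return loss_fn(model(x), target).item()

# In train mode (the default), dropout resamples its mask on every forward
# pass, so repeated evaluations of the same data can disagree.
train_mode_losses = {evaluate() for _ in range(5)}

# In eval mode, dropout is disabled and the loss is exactly reproducible.
model.eval()
eval_mode_losses = {evaluate() for _ in range(5)}
assert len(eval_mode_losses) == 1
```

My actual model is the BiLSTM-CRF, so I'm not sure whether this is what is happening in my case, but it shows the kind of exact repeatability I was expecting from the three TRAINING calls.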