Accuracy ~ 0.009 on test set after not touching my code for a week

Hello,
Hello,
I trained a model a while ago and got about 0.84 on the test set when evaluating on the same day I trained it.
After pausing my work for a few weeks, I came back to it and retested everything: loaded my model, switched to eval mode, ran inference, and so on. My accuracy now seems to be around 0.009 (compared to 0.65 when I originally tested it).
Is it the dataset splits? No, this has been verified and re-verified by both my supervisor and myself.

Some help would be immensely appreciated!

I think the code is quite self-explanatory and compartmentalized, so here’s the class I use for anything related to training, testing, etc.

Context: The code trains multiple backbones on different modalities for action recognition in videos. Don’t mind the code for the consensus methods; that isn’t the issue here.

Thanks for your support!

Cheers,

Usually these issues arise when something has changed in the data loading or processing stage, or when the wrong pre-trained parameters are loaded. You could thus make sure to load the proper state_dict (assuming you stored multiple ones), and also check the training accuracy to make sure this metric at least looks as expected. If it does, check for differences between the training and test dataset processing steps.
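A minimal sanity check for the first two suggestions could look like the sketch below. The model class, checkpoint filename, and input shapes here are placeholders, not the poster's actual code; the point is to confirm that a saved state_dict round-trips exactly and that the model is in eval mode (BatchNorm and Dropout behave differently in train mode, which alone can tank test accuracy):

```python
# Hypothetical sketch: verify a checkpoint reproduces the original model's
# outputs after a save/load round trip. SimpleNet and "ckpt.pth" are
# illustrative stand-ins for the actual model and checkpoint path.
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm1d(8)
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc(self.bn(x))

model = SimpleNet()
torch.save(model.state_dict(), "ckpt.pth")

restored = SimpleNet()
# strict=True (the default) raises if any key is missing or unexpected,
# which catches loading a checkpoint from a mismatched architecture.
restored.load_state_dict(torch.load("ckpt.pth"), strict=True)

# eval() matters: in train mode, BatchNorm uses batch statistics and
# Dropout is active, so outputs (and accuracy) will differ.
model.eval()
restored.eval()

x = torch.randn(4, 8)
with torch.no_grad():
    assert torch.allclose(model(x), restored(x)), "outputs diverge after reload"
print("checkpoint round-trip OK")
```

If the reloaded model's outputs diverge here, the problem is in saving/loading; if they match but test accuracy is still near zero, the train/test preprocessing mismatch becomes the prime suspect.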


I doubt anything changed in the data loading stage. I also made sure to use the same seed.

You could thus make sure to load the proper state_dict (assuming you stored multiple ones)

I did make sure that I wasn’t loading the wrong model, and sadly I’m not.

check the training accuracy to make sure this metric at least looks as expected.

I also checked the training accuracy. It must be correct, since everything worked a few weeks ago and I haven’t changed anything in the code since then.

Thank you for your help anyway!