I’m trying to export a trained copy of my model for use in another Python script. However, when a sample of labelled training data is passed through the network, it predicts the wrong class. This is surprising, as I get around 98% accuracy on both the training and test sets.
Another issue is that after the initial prediction, the network predicts the same class for every subsequent sample. The only way to change the predictions once it gets stuck like that is to save a copy of the trained network again. This could be an issue with Jupyter notebooks, though; I have tried restarting the kernel and clearing all variables after trying each new sample.
All the data is preprocessed and scaled identically to the original training and testing script.
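For reference, this is roughly how I ensure the transform is identical: the scaling statistics are fitted on the training data and persisted, then reloaded in the inference script. (This is a minimal sketch with made-up data and a hand-rolled scaler standing in for my real preprocessing; the file name `scaler.pkl` is illustrative.)

```python
import pickle
import numpy as np

# Stand-in for the real preprocessing: scale with statistics
# computed on the *training* data only.
X_train = np.array([[0.0, 10.0], [2.0, 20.0], [4.0, 30.0]])
mean, std = X_train.mean(axis=0), X_train.std(axis=0)

# Persist the fitted statistics next to the model weights so the
# inference script can apply exactly the same transform.
with open("scaler.pkl", "wb") as f:
    pickle.dump({"mean": mean, "std": std}, f)

# In the inference script: reload and reuse the same statistics
# (transform only, never re-fit on the new data).
with open("scaler.pkl", "rb") as f:
    stats = pickle.load(f)
X_new = np.array([[2.0, 20.0]])
X_scaled = (X_new - stats["mean"]) / stats["std"]
print(X_scaled)  # the per-column means scale to zeros: [[0. 0.]]
```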
Is it even possible to use saved states in this way?
I am new to PyTorch, so it is also likely that I have made some kind of grave error somewhere else.
After training the network I am saving the model state dictionary:
```python
# Save trained network
PATH = "models/ANNKDD.pt"
torch.save(network.state_dict(), PATH)
```
From my understanding, `state_dict()` returns a dictionary of the network's weights at the moment it is called, which `torch.save` then serialises to disk.
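To illustrate what I mean (using a toy `nn.Sequential` model in place of my `Network` class; the layer sizes here are made up):

```python
import torch
import torch.nn as nn

# Toy model standing in for Network(); layer names and sizes
# are illustrative only.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# state_dict() maps each parameter name to its weight tensor.
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))
# 0.weight (8, 4)
# 0.bias (8,)
# 2.weight (2, 8)
# 2.bias (2,)
```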
Then in a new python script I am loading in the trained network using:
```python
PATH = "models/ANNKDD.pt"
network = Network()
network.load_state_dict(torch.load(PATH))
network.eval()
with torch.no_grad():
    predictions = network(X)
```
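This is the kind of round-trip check I would expect to pass, i.e. a reloaded state dict should reproduce the original model's outputs exactly (again using a toy model in place of `Network()`; the file name is illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

X = torch.randn(5, 4)
with torch.no_grad():
    before = model(X)

torch.save(model.state_dict(), "roundtrip.pt")

# Fresh, randomly initialised copy; loading the state dict should
# make it behave identically to the original.
clone = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
clone.load_state_dict(torch.load("roundtrip.pt"))
clone.eval()
with torch.no_grad():
    after = clone(X)

print(torch.allclose(before, after))  # True: save/load preserves outputs
```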
Tested on both PyTorch 1.4.0 and 1.6.0.
Any insight into this problem would be much appreciated, thanks!