Huh, I’m confused: is torch.save(model, …) actually wrong, and should we be using torch.save(model.state_dict(), …) instead?
No, it’s not wrong, just a different approach. With the former, the whole model object gets pickled; with the latter, only its parameters. Since pickle can be quite a mess when it comes to import dependencies, I would generally recommend the latter approach, especially if you are planning to run the model on a different machine.
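A minimal sketch of the difference, using plain pickle as a stand-in for torch.save (which uses pickle under the hood); the Model class and its parameter names here are made up for illustration:

```python
import pickle

class Model:
    """Toy stand-in for an nn.Module."""
    def __init__(self):
        # analogous to the model's learnable parameters
        self.weights = {"fc.weight": [1.0, 2.0], "fc.bias": [0.5]}

    def state_dict(self):
        # only the parameter values -- plain builtins, no class reference
        return dict(self.weights)

model = Model()

# Approach 1: pickle the whole object. The byte stream embeds a
# reference to __main__.Model, so loading it elsewhere requires that
# exact class to be importable under the same module path.
whole = pickle.dumps(model)

# Approach 2: pickle only the state dict. No dependency on the
# defining module -- you rebuild the model yourself and load into it.
params_only = pickle.dumps(model.state_dict())

restored = pickle.loads(params_only)
print(restored["fc.bias"])  # [0.5]
```

The second stream deserializes anywhere, which is exactly why state_dict is the safer choice across machines or refactored codebases.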
I’m forking the official word-level language modeling example. I can’t find an explicit optimizer there; all I have is a loss.backward() call. I’m not able to reproduce the results by saving the model as explained here.
In this tutorial the weight updates are performed manually in this line of code.
Since there is no optimizer object keeping internal running estimates, you don’t have to store anything regarding the optimization.
Something looks fishy. Could you create a new thread and post your complete issue there?
It would also be easier to debug if you could post your code so that we can have a look.
Hi! I have a problem loading my model. I’m training VGG19 on CIFAR-10 in Colab. When I load it in Colab it works fine, but when I load it on my laptop with the same code it gives an error. Both environments use Python 3, and the model was trained with CUDA.
Error:
Yes, if you use an in-memory buffer (io.BytesIO rather than StringIO, since torch.save writes binary data) you can create a file-like stream, write your model state to it, then push that to S3.
What I additionally do is use joblib to compress and pickle the stream’s contents, push that to S3, then load it back with joblib into a file-like stream and read the model state back into a model object to resume.
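A runnable sketch of this round trip, with hedged substitutions: plain pickle stands in for torch.save/torch.load, stdlib zlib stands in for joblib compression, and the S3 upload/download step (e.g. via boto3) is only indicated in comments. The state-dict contents are invented:

```python
import io
import pickle
import zlib

state = {"fc.weight": [1.0, 2.0], "fc.bias": [0.5]}  # hypothetical state_dict

# Write the model state to an in-memory binary buffer
# (stand-in for torch.save(model.state_dict(), buf)).
buf = io.BytesIO()
pickle.dump(state, buf)

# Compress before uploading (joblib compression stand-in).
compressed = zlib.compress(buf.getvalue())

# ... upload `compressed` to S3 here, then later download it ...

# Decompress and read the state back to resume
# (stand-in for torch.load on a buffer).
restored = pickle.loads(zlib.decompress(compressed))
print(restored["fc.bias"])  # [0.5]
```

With real PyTorch you would pass the BytesIO buffer directly to torch.save and torch.load; the compression step is optional but shrinks the object you store in S3.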