Can I find old docs (torch.utils.trainer.plugins)

I’m trying to update some old code using PyTorch 0.4, and it uses torch.utils.trainer.plugins.Logger. It’s not supported by more recent versions of PyTorch, so I need to get rid of it. Are there any old docs showing what its purpose is and how I can replace it?

Unfortunately it looks like trainer was an undocumented portion of PyTorch that has been removed (see this forum post and this GitHub comment). The same forum post recommends TNT and also points to an older forum post.

However, all of this is from 2017. I’m not entirely sure what Logger did, but you could check out a library that allows for metrics and callbacks. I hope that helps a bit!

Thank you for your reply!
Maybe you can help me out with the other ones as well? They’re called Monitor and LossMonitor, and there is also something about checkpoints. How are checkpoints handled in more recent versions of PyTorch?


Of course! So all of torch.utils.trainer was an undocumented and since-removed portion of PyTorch. As such, Monitor and LossMonitor are in the same category as Logger, to the best of my knowledge. A lot of people have worked on tools to monitor training in PyTorch:

  • there are ways to hook up Tensorboard to PyTorch
  • or use TensorboardX
  • the aforementioned metrics and callbacks (which I only mention again because it is what I use, not necessarily because it’s the best)
  • a Facebook tool called Visdom
  • and even commercial products like Weights and Biases (which seems pretty cool, I’ll have to check it out!)

There are probably a ton more options out there, and it’ll depend on what your use case is, how in-depth you need to go, etc.
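That said, if all the old plugin did was track an average training loss, you may not need a library at all. I don’t know LossMonitor’s exact behaviour, so the class below is only my guess at a plain-Python stand-in (the class name and the `interval` parameter are my own invention):

```python
class LossMonitor:
    """Guessed stand-in for the removed torch.utils.trainer LossMonitor:
    keep a running average of the training loss and report it every
    `interval` steps."""

    def __init__(self, interval=10):
        self.interval = interval
        self.total = 0.0
        self.count = 0

    def update(self, loss):
        """Call once per training step with the scalar loss value
        (e.g. loss.item() in modern PyTorch)."""
        self.total += float(loss)
        self.count += 1
        if self.count % self.interval == 0:
            print(f"step {self.count}: avg loss {self.average:.4f}")

    @property
    def average(self):
        return self.total / self.count if self.count else 0.0
```

You’d construct it once before the training loop and call `monitor.update(loss.item())` after each step; anything fancier (per-epoch resets, writing to TensorBoard) is easy to bolt on.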

As for checkpointing, I’m not sure whether you mean torch.utils.checkpoint, a tool that has existed (and still exists!) in PyTorch to trade extra compute time for lower memory usage, or simply saving the model, which is commonly called checkpointing both in other frameworks (like TensorFlow) and in many codebases (like my own…).

If you’re talking about the former, I’ve never used it, but a quick glance suggests it has stayed mostly the same since 0.4.0.

If you’re talking about the latter, that is done via torch.save, which “checkpoints” your model by saving the model weights to a pickled file.
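Under the hood, saving in PyTorch is pickle-based, so the idea is easy to sketch in plain Python. With a real model you would call torch.save(model.state_dict(), path) and later model.load_state_dict(torch.load(path)); the dict below is just a made-up stand-in for a state_dict so the sketch runs without PyTorch installed:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for model.state_dict(): a mapping from
# parameter names to weight values.
state = {"layer1.weight": [[0.1, 0.2], [0.3, 0.4]], "layer1.bias": [0.0, 0.0]}

path = os.path.join(tempfile.mkdtemp(), "checkpoint.pkl")

# torch.save(state, path) does roughly this, plus tensor-aware handling:
with open(path, "wb") as f:
    pickle.dump(state, f)

# ...and torch.load(path) is the counterpart:
with open(path, "rb") as f:
    restored = pickle.load(f)

assert restored == state  # the round trip recovers the weights
```

Saving the state_dict rather than the whole model object is the usual recommendation, since it doesn’t tie the checkpoint to your exact class definitions.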

Hope that helps!

Thank you so much for the detailed answer! This is going to be very useful; I’m pretty sure I was thinking of “checkpointing”. The code I have runs with torch 0.4.0, but it was created in an earlier version, so I can only guess that they had to create their own checkpointing method. torch.save seems like a lot less trouble!

Once again, thank you for being kind and helpful to a newcomer; it makes you want to stay in the community.


Glad I could help! I’ve been using PyTorch for a while now but only recently got into the community, and it’s a pretty welcoming place – glad I could pass that on! Good luck with your version-updating project, I must say I don’t envy you :sweat_smile: