Hi,
I’m training a lightweight GAN on Google Colab, so I need to resume training across a couple of sessions. I can save and load the generator and discriminator state dicts just fine, but I get the following error when trying to load the Adam optimizer state dicts:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-27-cbfbe7dc0130> in <module>()
4 G.cuda()
5 D.cuda()
----> 6 g_optimizer.load_state_dict(torch.load('/content/drive/MyDrive/GAN_Temp/g_optim.pth'))
7 d_optimizer.load_state_dict(torch.load('/content/drive/MyDrive/GAN_Temp/d_optim.pth'))
/usr/local/lib/python3.7/dist-packages/torch/optim/optimizer.py in load_state_dict(self, state_dict)
135 # Validate the state_dict
136 groups = self.param_groups
--> 137 saved_groups = state_dict['param_groups']
138
139 if len(groups) != len(saved_groups):
TypeError: 'method' object is not subscriptable
I save the state dicts like so:
torch.save(G.state_dict(), '/content/drive/MyDrive/GAN_Temp/G.pth')
torch.save(D.state_dict(), '/content/drive/MyDrive/GAN_Temp/D.pth')
torch.save(g_optimizer.state_dict, '/content/drive/MyDrive/GAN_Temp/g_optim.pth')
torch.save(d_optimizer.state_dict, '/content/drive/MyDrive/GAN_Temp/d_optim.pth')
Originally I was saving them every so many batch iterations inside my training loop, but I’ve also tried letting a few full epochs complete, running the save steps above, and then reloading the state dicts before running the training loop again – same result.
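To sanity-check the error message itself, I tried a torch-free sketch (using a made-up stand-in class, not the real optimizer) that produces the same TypeError when a bound method, rather than its return value, gets indexed:

```python
# Stand-in class (hypothetical, not the real torch.optim.Optimizer),
# just to show when "'method' object is not subscriptable" appears.
class FakeOptimizer:
    def state_dict(self):
        return {'param_groups': []}

opt = FakeOptimizer()

sd = opt.state_dict          # bound method object: no parentheses
try:
    sd['param_groups']       # same subscript load_state_dict performs
except TypeError as e:
    print(e)                 # 'method' object is not subscriptable

sd = opt.state_dict()        # calling it returns an actual dict
print(sd['param_groups'])    # []
```

So indexing the un-called method reproduces the exact message from my traceback, while calling it first works.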
I feel like I’m probably just doing something silly, but any ideas?