Problem with saving a model (RuntimeError: std::bad_alloc)

Hi, when I tried to save my model using, 'path'), I got this error:

Traceback (most recent call last):
  File "", line 89, in <module>, '/home/vladislavprh/decoder.pth')
  File "/home/vladislavprh/anaconda3/lib/python3.6/site-packages/torch/", line 120, in save
    return _save(obj, f, pickle_module, pickle_protocol)
  File "/home/vladislavprh/anaconda3/lib/python3.6/site-packages/torch/", line 192, in _save
RuntimeError: std::bad_alloc

I train it on 8 GPUs, and every layer in the model lives on a different GPU (by layer I mean nn.LSTM, nn.Linear, etc.).
How can I solve it?
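One standard workaround (not confirmed as the fix in this thread, just a common approach) is to copy the state_dict to CPU before serializing, so torch.save works on host tensors instead of gathering from eight GPUs at once. A minimal sketch, using a small stand-in model and an illustrative file path:

```python
import torch
import torch.nn as nn

# Small stand-in model; in the original setup each layer lives on a
# different GPU, e.g. the nn.LSTM on cuda:0, an nn.Linear on cuda:1, etc.
model = nn.Sequential(nn.Linear(16, 32), nn.Linear(32, 4))

# Copy every tensor in the state_dict to CPU before saving, so the
# serializer only touches host memory regardless of where layers live.
cpu_state = {k: v.cpu() for k, v in model.state_dict().items()}

torch.save(cpu_state, 'decoder.pth')  # illustrative path

# Reloading: rebuild the model, load weights on CPU, then move each
# layer back to its GPU with .to('cuda:N') as needed.
model.load_state_dict(torch.load('decoder.pth', map_location='cpu'))
```

Saving the state_dict rather than the whole model object also keeps the checkpoint smaller and avoids pickling the module structure itself.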


I wonder if this is related to saving a large checkpoint. We fixed some bugs with very large checkpoints on OSX in v0.1.11. Are you on at least PyTorch v0.1.11? Also, do you have enough disk space to write checkpoints?
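The disk-space question can be answered from Python itself with the standard library; a small sketch (the checkpoint directory and the 2 GB threshold here are made up for illustration):

```python
import shutil

# Free space on the filesystem holding the checkpoint directory
# ('.' stands in for wherever checkpoints are written).
usage = shutil.disk_usage('.')
free_gb = usage.free / (1024 ** 3)

# Arbitrary 2 GB threshold; a multi-GPU model checkpoint can be large.
if free_gb < 2:
    print(f'Only {free_gb:.1f} GB free; checkpoint write may fail')
```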

Hi, smth.
Thank you for your reply. The problem was insufficient RAM, so I increased the memory and now it works.