Problem with saving model (RuntimeError: std::bad_alloc)

Hi, when I tried to save my model using torch.save(decoder.state_dict(), 'path') I got this error:

Traceback (most recent call last):
  File "train.py", line 89, in <module>
    torch.save(decoder.state_dict(), '/home/vladislavprh/decoder.pth')
  File "/home/vladislavprh/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 120, in save
    return _save(obj, f, pickle_module, pickle_protocol)
  File "/home/vladislavprh/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 192, in _save
    serialized_storages[key]._write_file(f)
RuntimeError: std::bad_alloc

I train it on 8 GPUs, with every layer of the model (nn.LSTM, nn.Linear, etc.) placed on a different GPU.
How can I solve this?

Thanks!

I wonder if this is related to saving a large checkpoint. We fixed some bugs with very large checkpoints on OSX in v0.1.11. Are you on at least PyTorch v0.1.11? Also, do you have enough disk space to write checkpoints?
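Both checks are quick to run from a Python prompt. A minimal sketch (the `/home` path is a stand-in for whatever partition the checkpoint is written to):

```python
import shutil
import torch

# Check the installed PyTorch version (the large-checkpoint fixes landed in 0.1.11).
print(torch.__version__)

# Check free space on the partition the checkpoint will be written to.
total, used, free = shutil.disk_usage('/home')
print(free // (1024 ** 3), 'GiB free')
```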

Hi smth,
Thank you for your reply. The problem was insufficient RAM; after increasing the memory, it now works.
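For anyone hitting this with a model spread over several GPUs: a common companion step is to save a CPU copy of the state_dict, so the checkpoint does not depend on the original device layout when it is loaded later. A minimal sketch with a toy stand-in model (the real decoder, devices, and path are the poster's; whether this alone avoids the host-side `bad_alloc` still depends on having enough RAM, as noted above):

```python
import torch
import torch.nn as nn

# Toy stand-in for the multi-GPU decoder; runs on CPU here.
decoder = nn.Sequential(nn.Linear(4, 8), nn.LSTM(8, 8))

# Copy every parameter/buffer tensor to CPU before serializing.
cpu_state = {k: v.cpu() for k, v in decoder.state_dict().items()}
torch.save(cpu_state, 'decoder.pth')
```

The saved file can then be loaded on any machine with `torch.load('decoder.pth')` and fed to `load_state_dict`, regardless of how the layers were originally placed across GPUs.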