CUDA Runtime Error

The code was working fine until two days ago. Now I get this error abruptly when I run it on a CUDA device; it still works fine without the CUDA device:

```
/pyenv/py3.6.3/lib/python3.6/site-packages/torch/nn/modules/module.py:491: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
  result = self.forward(*input, **kwargs)

THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "generate.py", line 78, in <module>
    output, hidden = model(input, hidden)
  File "pyenv/py3.6.3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "model.py", line 81, in forward
    raw_output, new_h = rnn(raw_output, hidden[l])
  File "pyenv/py3.6.3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "weight_drop.py", line 47, in forward
    return self.module.forward(*args)
  File "pyenv/py3.6.3/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 192, in forward
    output, hidden = func(input, self.all_weights, hx, batch_sizes)
  File "pyenv/py3.6.3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 323, in forward
    return func(input, *fargs, **fkwargs)
  File "pyenv/py3.6.3/lib/python3.6/site-packages/torch/nn/_functions/rnn.py", line 287, in forward
    dropout_ts)
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58
```

Could anyone please look into it?
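
Note that the UserWarning at the top of the log is separate from the out-of-memory crash; it only says the RNN weights are no longer in one contiguous chunk and can be re-compacted with `flatten_parameters()`. A minimal sketch of where that call normally goes, assuming a plain `nn.LSTM` (the `weight_drop.py` wrapper in this project may intentionally leave the weights non-contiguous):

```python
import torch
import torch.nn as nn

# Minimal sketch, not the actual model.py: re-compact the RNN weights after
# the module has been moved to the GPU so cuDNN can use one contiguous chunk.
rnn = nn.LSTM(input_size=400, hidden_size=1150, num_layers=3).cuda()
rnn.flatten_parameters()
```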

How much GPU memory do you have? Try reducing the batch size.
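
A quick way to see how much memory the device has and how much this process has already allocated, using only `torch.cuda` calls (the batch-size value below is just a hypothetical example, not taken from the poster's script):

```python
import torch

props = torch.cuda.get_device_properties(0)
print(f"Total GPU memory:     {props.total_memory / 1024**3:.2f} GiB")
print(f"Allocated by PyTorch: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")

# Hypothetical knob: halving the batch size (or the generation length) is the
# usual first thing to try when the allocation in the traceback fails.
batch_size = 40  # e.g. down from 80
```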