Environment Issue

I have a model that works well on my old machine, but on my new machine it keeps failing with the following error:
torch.backends.cudnn.CuDNNError: 8: b'CUDNN_STATUS_EXECUTION_FAILED'

Here are the configurations of the two machines:
Old machine:
Ubuntu 16.04
GPU: 1080 Ti
Nvidia driver version: 396.54

torch.__version__
'0.3.1'

torch.version.cuda
'9.1.85'

torch.backends.cudnn.version()
7005

New machine:
Ubuntu 18.04
GPU: 2080 Ti
Nvidia driver version: 410.78

torch.__version__
'0.3.1'

torch.version.cuda
'9.1.85'

torch.backends.cudnn.version()
7005
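(As a sanity check on these numbers: as far as I know, cuDNN packs its version into a single integer as major * 1000 + minor * 100 + patch, so the 7005 reported on both machines decodes to 7.0.5. A tiny sketch of that decoding, assuming this packing scheme:)

```python
def decode_cudnn_version(v):
    # Decode the integer returned by torch.backends.cudnn.version()
    # into (major, minor, patch), assuming cuDNN's packing scheme:
    # version = major * 1000 + minor * 100 + patch
    return v // 1000, (v % 1000) // 100, v % 100

print(decode_cudnn_version(7005))  # the value both machines report: (7, 0, 5)
print(decode_cudnn_version(7402))  # the cuDNN 7.4.2 I installed for CUDA 10
```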

For the new machine, I installed two CUDA/cuDNN combinations:
CUDA 10.0.130 + cuDNN v7.4.2 for CUDA 10
CUDA 9.1.85 + cuDNN v7.0.5 for CUDA 9.1
Both have been added to PATH and LD_LIBRARY_PATH:
edillower@edillower-ubuntu:~$ echo $PATH
/home/edillower/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda-10.0/bin:/usr/local/cuda-9.1/bin
edillower@edillower-ubuntu:~$ echo $LD_LIBRARY_PATH
:/usr/local/cuda-10.0/lib64:/usr/local/cuda-9.1/lib64
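(To double-check which CUDA directories the dynamic loader will actually search, here is a small helper I used, assuming the LD_LIBRARY_PATH value echoed above; note its leading ':' produces an empty entry, which the loader treats as the current directory:)

```python
def cuda_entries(ld_library_path):
    # Split an LD_LIBRARY_PATH value on ':' and keep the entries
    # that point at a CUDA install. An empty entry (e.g. from a
    # leading or trailing ':') means the current working directory.
    return [e for e in ld_library_path.split(":") if "cuda" in e]

# The value echoed above (note the leading ':'); at runtime one
# could instead pass os.environ.get("LD_LIBRARY_PATH", "").
path = ":/usr/local/cuda-10.0/lib64:/usr/local/cuda-9.1/lib64"
print(cuda_entries(path))  # ['/usr/local/cuda-10.0/lib64', '/usr/local/cuda-9.1/lib64']
```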

Another, completely different program that uses PyTorch 0.3.1 hits the same error.
I also tried PyTorch 1.0 for both programs, but the same error still appears.
I have no idea what's wrong with the new machine's environment. Let me know if any further information is needed; I'd appreciate any thoughts or possible fixes.

Here’s the entire error message:
build word sequence feature extractor: LSTM …
Traceback (most recent call last):
  File "main.py", line 462, in <module>
    decode_results = load_model_decode(data, 'raw')
  File "main.py", line 391, in load_model_decode
    model = SeqModel(data)
  File "/home/edillower/Documents/parser/ParserPDTB/model/seqmodel.py", line 34, in __init__
    self.word_hidden = WordSequence(data)
  File "/home/edillower/Documents/parser/ParserPDTB/model/wordsequence.py", line 69, in __init__
    self.lstm = self.lstm.cuda()
  File "/home/edillower/pdtb/lib/python3.6/site-packages/torch/nn/modules/module.py", line 216, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/home/edillower/pdtb/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 123, in _apply
    self.flatten_parameters()
  File "/home/edillower/pdtb/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 102, in flatten_parameters
    fn.rnn_desc = rnn.init_rnn_descriptor(fn, handle)
  File "/home/edillower/pdtb/lib/python3.6/site-packages/torch/backends/cudnn/rnn.py", line 42, in init_rnn_descriptor
    cudnn.DropoutDescriptor(handle, dropout_p, fn.dropout_seed)
  File "/home/edillower/pdtb/lib/python3.6/site-packages/torch/backends/cudnn/__init__.py", line 207, in __init__
    self._set(dropout, seed)
  File "/home/edillower/pdtb/lib/python3.6/site-packages/torch/backends/cudnn/__init__.py", line 232, in _set
    ctypes.c_ulonglong(seed),
  File "/home/edillower/pdtb/lib/python3.6/site-packages/torch/backends/cudnn/__init__.py", line 283, in check_error
    raise CuDNNError(status)
torch.backends.cudnn.CuDNNError: 8: b'CUDNN_STATUS_EXECUTION_FAILED'