When using LSTM with dropout, the warning below is printed:
[W CUDAGuardImpl.h:46] Warning: CUDA warning: driver shutting down (function uncheckedGetDevice)
[W CUDAGuardImpl.h:62] Warning: CUDA warning: driver shutting down (function uncheckedSetDevice)
Minimal code to reproduce:
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1000, 8)
        # two stacked LSTM layers; the warning appears only with dropout > 0
        self.lstm = nn.LSTM(8, 8, 2, dropout=0.2, batch_first=True)

    def forward(self, x):
        x = self.embedding(x)
        x = self.lstm(x)  # returns (output, (h_n, c_n))
        return x

inp = torch.randint(0, 1000, (32, 512))
inp = inp.to('cuda')
model = Net().to('cuda')
model(inp)
When dropout is not used, no warning is printed.
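For reference, the control variant I mean is identical except that the dropout argument is removed (the class name here is just for illustration):

import torch
from torch import nn

class NetNoDropout(nn.Module):  # name is just for illustration
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1000, 8)
        self.lstm = nn.LSTM(8, 8, 2, batch_first=True)  # dropout removed

    def forward(self, x):
        x = self.embedding(x)
        x = self.lstm(x)
        return x

inp = torch.randint(0, 1000, (32, 512)).to('cuda')
model = NetNoDropout().to('cuda')
model(inp)  # exits with no CUDA warnings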
version/platform:
Windows 11 (the warning doesn’t even appear on WSL on the same machine)
Python 3.10
torch version: 1.13.1+cu117 and 1.13.1+cu116 (both warn)
The warning above does not seem to affect anything (the code runs, exits normally, and speed is fine).
I searched online, and this problem usually seems to be caused by misusing multiprocessing. Clearly, I am not using multiprocessing myself, so perhaps something inside LSTM does. Why is this warning printed? Does it affect my program in any way? How do I fix it, or is this a bug?
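Since the warning mentions the driver shutting down, one workaround I can imagine (a sketch only; I have not confirmed that it silences the warning) is to keep all CUDA objects inside a function so they are released before the interpreter tears down, and to synchronize before returning:

import torch
from torch import nn

class Net(nn.Module):  # same model as above
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1000, 8)
        self.lstm = nn.LSTM(8, 8, 2, dropout=0.2, batch_first=True)

    def forward(self, x):
        x = self.embedding(x)
        x = self.lstm(x)
        return x

def main():
    model = Net().to('cuda')
    inp = torch.randint(0, 1000, (32, 512), device='cuda')
    model(inp)
    torch.cuda.synchronize()  # wait for all queued kernels to finish

if __name__ == '__main__':
    main()
    # the locals of main() (model, tensors) are freed when it returns,
    # hopefully before the CUDA driver starts shutting down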