LSTM dropout causes warning

When using an LSTM with dropout, the following warning is printed:

[W CUDAGuardImpl.h:46] Warning: CUDA warning: driver shutting down (function uncheckedGetDevice)
[W CUDAGuardImpl.h:62] Warning: CUDA warning: driver shutting down (function uncheckedSetDevice)

Minimal code to reproduce:

import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()  # required before registering submodules
        self.embedding = nn.Embedding(1000, 8)
        self.lstm = nn.LSTM(8, 8, 2,
                            dropout=0.2, batch_first=True)

    def forward(self, x):
        x = self.embedding(x)
        x = self.lstm(x)
        return x

inp = torch.randint(0, 1000, (32, 512))
inp = inp.to('cuda')
model = Net().to('cuda')
out = model(inp)

When dropout is not used, no warning is printed.
Windows 11 (the warning doesn’t even appear on WSL on the same machine)
Python 3.10
torch version: 1.13.1+cu117 and 1.13.1+cu116 (both warn)

The above warning does not seem to affect anything (code runs, exits normally, and speed is okay).
I searched online, and it appears this problem is usually caused by misusing multiprocessing. But I am clearly not using multiprocessing myself, so I guess it is possible that something inside LSTM uses it. What does this warning mean? Does it have any impact on my program? How do I solve it, or is this a bug?

I cannot reproduce this issue on Linux with 1.13.1+cu117 and the current nightly release, so it could be Windows-specific.

Yes, it is very likely Windows-specific, because even WSL (Windows Subsystem for Linux) cannot reproduce the problem. I just wonder what this warning means and how I can potentially fix it. Thanks in advance!

I would probably add the `if __name__ == '__main__'` guard as described here, as it’s often needed on Windows.
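For reference, a minimal sketch of what I mean (the model and shapes are just the ones from your repro, run on CPU here for brevity; on Windows, multiprocessing spawns re-import the main module, so top-level code runs again unless it is behind this guard):

```python
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1000, 8)
        self.lstm = nn.LSTM(8, 8, 2, dropout=0.2, batch_first=True)

    def forward(self, x):
        # returns (output, (h_n, c_n)) as nn.LSTM does
        return self.lstm(self.embedding(x))

def main():
    inp = torch.randint(0, 1000, (4, 16))
    model = Net()
    out, (h, c) = model(inp)
    print(out.shape)  # (batch, seq, hidden) with batch_first=True

if __name__ == '__main__':
    # guard keeps this from re-running in spawned child processes on Windows
    main()
```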

Thanks. But I tried that and it does not work. In fact, I do have an `if __name__ == '__main__'` guard in my actual program; I just forgot to include it in the sample code.