Problem running on the second GPU on Ubuntu

My first GPU works fine, but every time I run a program on the second GPU, the following error comes up:
RuntimeError: cublas runtime error : resource allocation failed at /opt/conda/conda-bld/pytorch_1524586445097/work/aten/src/THC/THCGeneral.cpp:411

Any idea about this kind of issue? I can see both GPUs in the terminal when I run nvidia-smi.
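
A minimal example of what I mean by "running on the second GPU" (the layer sizes and batch size are just placeholders, not my real code):

```python
import torch
import torch.nn as nn

# Placeholder model and data, only to show how the second GPU is selected.
device = torch.device("cuda:1")           # second GPU (index 1)

model = nn.Linear(128, 10).to(device)     # parameters moved to cuda:1
x = torch.randn(32, 128, device=device)   # input created directly on cuda:1
out = model(x)                            # cublas is used for this matmul
print(out.device)                         # expect: cuda:1
```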


I am facing a very similar problem; my code runs on the first GPU without any errors. However, when I use the second GPU with device = torch.device("cuda:1"), I get this error message: cuda runtime error (77) : an illegal memory access was encountered at c:\programdata\miniconda3\conda-bld\pytorch_1524549877902\work\aten\src\thc\generic/THCTensorCopy.c:20.
I am running my code on Windows 10 with PyTorch 0.4 and CUDA. I appreciate any help.
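
For illustration, a stripped-down version of the pattern I mean (the model and tensor shapes are placeholders); I move both the model and the data to cuda:1 explicitly:

```python
import torch
import torch.nn as nn

# Placeholder code, only to show how device "cuda:1" is used.
device = torch.device("cuda:1")

model = nn.Linear(64, 4).to(device)   # parameters on cuda:1

# Creating the tensor directly on cuda:1 instead of the default cuda:0,
# so the model and its input are on the same device.
x = torch.randn(8, 64).to(device)
out = model(x)                        # forward pass on cuda:1
```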

Update:
According to the thread "Runtime error occurs when using .cuda(1)", there is a workaround for this issue.
Wrap the code that uses the second GPU in something like with torch.cuda.device(1):, as sketched below.
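
A minimal sketch of what that looks like (the tensors here are placeholders I made up):

```python
import torch

# Make GPU 1 the current device for this block, so allocations and
# CUDA handles (e.g. cublas) are created on the right GPU.
with torch.cuda.device(1):
    a = torch.randn(8, 64).cuda()   # .cuda() now targets device 1
    b = torch.randn(64, 4).cuda()
    out = a.mm(b)                   # matrix multiply runs on GPU 1

print(out.device)                   # expect: cuda:1
```

Calling torch.cuda.set_device(1) once at the start of the script should have a similar effect.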