When I try to use the GPU, I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-52891be9f0e3> in <module>
----> 1 torch.ones((3,4)).to("cuda")
e:\miniconda3\lib\site-packages\torch\cuda\__init__.py in _lazy_init()
161 "Cannot re-initialize CUDA in forked subprocess. " + msg)
162 _check_driver()
--> 163 torch._C._cuda_init()
164 _cudart = _load_cudart()
165 _cudart.cudaGetErrorName.restype = ctypes.c_char_p
RuntimeError: CUDA error: unknown error
Here is my environment info:
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce GTX 960M
Nvidia driver version: 430.39
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\cudnn64_7.dll
Versions of relevant libraries:
[pip3] numpy==1.16.3
[conda] blas 1.0 mkl
[conda] mkl 2019.3 203
[conda] mkl_fft 1.0.12 py37h14836fe_0
[conda] mkl_random 1.0.2 py37h343c172_0
[conda] pytorch 1.1.0 py3.7_cuda100_cudnn7_1 pytorch
[conda] torchvision 0.2.2 py_3 pytorch
How can I fix this error?
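Since `torch.cuda.is_available()` reports Yes but the lazy init in `torch._C._cuda_init()` still fails, one thing worth ruling out first is a driver-level problem rather than a PyTorch one (on Windows this "unknown error" is sometimes cleared by a reboot after a driver install, or by updating the driver). A minimal stdlib sketch to check that the NVIDIA driver is reachable at all, independent of PyTorch; the helper name `driver_reachable` is mine, not a PyTorch or NVIDIA API:

```python
import shutil
import subprocess


def driver_reachable(timeout=10):
    """Return True if nvidia-smi is on PATH and exits cleanly.

    nvidia-smi talks directly to the NVIDIA driver, so a failure here
    points at the driver itself rather than at PyTorch or the CUDA 10.0
    runtime it was built against.
    """
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False  # driver utilities not found on PATH
    try:
        result = subprocess.run([exe], capture_output=True, timeout=timeout)
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0


if __name__ == "__main__":
    print("NVIDIA driver reachable:", driver_reachable())
```

If this returns False (or `nvidia-smi` itself errors out in a terminal), the fix lies in the driver install, not in PyTorch; if it returns True, the mismatch is between the driver and the `py3.7_cuda100_cudnn7_1` build, and retrying after updating driver 430.39 would be my next step.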