I am getting the following error:
import torch
torch.randn(3, 3).to("cuda")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: CUDA-capable device(s) is/are busy or unavailable
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
torch.cuda.is_available()
True
Even though torch.cuda.is_available() returns True, and nvidia-smi shows no other processes running on the GPU:
Tue Dec 17 10:07:21 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA A10-24Q                 On  |   00000002:00:00.0 Off |                    0 |
| N/A   N/A    P8             N/A /  N/A  |       1MiB /  24512MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
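For reference, here is the diagnostic snippet I am using to narrow things down. It is only a sketch: it prints the PyTorch build info and then forces a tiny allocation, since CUDA context creation is lazy and "busy or unavailable" is typically raised only at that first real device access, not by torch.cuda.is_available().

```python
import torch

# Facts that distinguish a driver/vGPU problem from a PyTorch build problem.
print("torch version:   ", torch.__version__)
print("built with CUDA: ", torch.version.cuda)          # toolkit PyTorch was built against
print("cuda available:  ", torch.cuda.is_available())
print("device count:    ", torch.cuda.device_count())

if torch.cuda.is_available():
    try:
        # A tiny allocation triggers lazy CUDA context creation, which is
        # where "CUDA-capable device(s) is/are busy or unavailable" surfaces.
        x = torch.zeros(1, device="cuda")
        print("context OK on:", torch.cuda.get_device_name(0))
    except RuntimeError as e:
        print("context creation failed:", e)
```

On this machine the print statements succeed but the torch.zeros call raises the RuntimeError shown above.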