I am using PyTorch 1.7. Running the following in IPython crashes with a segmentation fault:
In [1]: import torch
...: from torchvision.models import vgg19
...:
...: device = torch.device("cuda:0")
In [2]:
In [2]: memory = torch.cuda.memory_allocated(device)
Segmentation fault (core dumped)
And here is my GPU info:
~# nvidia-smi
Fri May 21 13:13:27 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:10.0 Off | 0 |
| N/A 33C P0 56W / 300W | 15323MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:00:11.0 Off | 0 |
| N/A 33C P0 57W / 300W | 462MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000000:00:12.0 Off | 0 |
| N/A 39C P0 56W / 300W | 464MiB / 16130MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-SXM2... On | 00000000:00:13.0 Off | 0 |
| N/A 55C P0 283W / 300W | 15965MiB / 16130MiB | 99% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
However, if I run the same check directly from the terminal, there is no segmentation fault:
# python -c 'import torch; print(torch.cuda.memory_allocated(torch.cuda.device("cuda:0")))'
0
Also, GPU 3 is fully utilized (99%, ~16 GiB in use according to nvidia-smi), so why does this check also return 0?
# python -c 'import torch; print(torch.cuda.memory_allocated(torch.cuda.device("cuda:3")))'
0
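For reference, here is a minimal sketch of how I would expect `torch.cuda.memory_allocated` to behave; my understanding (which may be wrong, hence this question) is that it reports only memory allocated by the current process's PyTorch caching allocator, not the GPU-wide usage that nvidia-smi shows:

```python
import torch

# Sketch under the assumption that memory_allocated() only counts tensors
# allocated by THIS process via PyTorch's caching allocator. Other
# processes' GPU memory (what nvidia-smi reports) would not appear here.
if torch.cuda.is_available():
    device = torch.device("cuda:0")
    before = torch.cuda.memory_allocated(device)  # 0 in a fresh process
    x = torch.empty(1024, 1024, device=device)    # ~4 MiB of float32
    after = torch.cuda.memory_allocated(device)
    print(after > before)                          # allocation is now visible
else:
    print("CUDA not available; nothing to measure")
```

If that assumption is right, it would explain the 0 on `cuda:3` (another process owns that memory), but not the segmentation fault in IPython.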