Environment:
Remote Linux with kernel version 5.8.0. I am not a superuser.
Python 3.8.6
CUDA Version: 11.1
GPU is RTX 3090 with driver version 455.23.05
CPU: Intel Core i9-10900K
PyTorch version: 1.8.0+cu111
System imposed RAM quota: 4GB
System imposed number of threads: 512198
System imposed RLIMIT_NPROC value: 300
After I run the following code (immediately after entering the python3 command line, so nothing else was run before):
```python
import os
os.environ['OPENBLAS_NUM_THREADS'] = '2'
import torch
torc…
```
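For reference, the usual pattern under a tight RLIMIT_NPROC is to cap every BLAS/OpenMP thread pool before the first `import torch`, not just OpenBLAS. This is a minimal sketch of that pattern; the exact thread count `2` is arbitrary, and the `torch` lines are commented out so the snippet runs without a CUDA install:

```python
import os

# These must be set BEFORE the first `import torch` / `import numpy`,
# because the BLAS backends read them once at library load time.
for var in ("OPENBLAS_NUM_THREADS", "OMP_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ[var] = "2"

# import torch                # would now create at most ~2 BLAS threads
# torch.set_num_threads(2)    # additionally caps torch's own intra-op pool

print(os.environ["OPENBLAS_NUM_THREADS"])  # → 2
```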
Virtualization: Microsoft AWS server
Operating System: Ubuntu 18.04.6 LTS
Architecture: x86-64
```
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.103.01   Driver Version: 470.103.01   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name     Persistence-M   | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compu…
```
In my case, it could not find `libcuda.so`, although `nvidia-smi` worked fine. I had simply missed `/usr/lib/x86_64-linux-gnu` in my `LD_LIBRARY_PATH`. After adding it, everything worked fine.
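A sketch of that fix, assuming the standard Ubuntu driver location `/usr/lib/x86_64-linux-gnu` (adjust if your driver installed `libcuda.so` elsewhere):

```shell
# Prepend the directory holding libcuda.so to the loader search path
# before starting Python. The ${VAR:+...} form avoids a trailing colon
# when LD_LIBRARY_PATH was previously unset.
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Optional check: see whether the dynamic loader cache knows libcuda
# (LD_LIBRARY_PATH still applies even if the cache does not list it).
ldconfig -p | grep libcuda || echo "libcuda not in ldconfig cache"

echo "$LD_LIBRARY_PATH"
```

To make the change permanent, the same `export` line can go in `~/.bashrc`, which is useful on a remote machine where you cannot run `ldconfig` as root.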