Cannot send models to GPU under MIG on A100

Right now I have three models: convnext_base, vgg16, and deit_base.
The first two are pretrained models from PyTorch, and the last one is from Hugging Face.
The only change I made is to the last layer of VGG16: I changed its number of output features from 1000 to 10, since I am testing on CIFAR-10.

With MIG enabled and CUDA_VISIBLE_DEVICES=$UUID passed to the program, I can successfully send convnext_base and deit_base to the device. However, VGG16 fails with the following error:

RuntimeError: Attempting to deserialize object on CUDA device 0 but torch.cuda.device_count() is 0. Please use torch.load with map_location to map your storages to an existing device.
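The error message itself suggests the workaround: pass `map_location` to `torch.load` so that storages saved on a CUDA device are remapped to a device that actually exists in the current process. A minimal self-contained sketch (the checkpoint path is just an illustration):

```python
import torch

# Save a tensor, then reload it explicitly onto the CPU.
# In the real case, map_location could also be "cuda:0" once the
# MIG slice is visible to the process.
t = torch.randn(3)
torch.save(t, "/tmp/demo_ckpt.pt")
loaded = torch.load("/tmp/demo_ckpt.pt", map_location="cpu")
print(loaded.device)  # cpu
```

Note, though, that this only sidesteps the deserialization error; it does not explain why `torch.cuda.device_count()` is 0 for that one model in the first place.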

Best
Max

Could you post the commands you’ve used to set up MIG on your A100, so that I could try to reproduce it, please?
Also, could you post the output of python -m torch.utils.collect_env?

Hi,
I am joining in on this old thread. I am trying to do DDP training with a Hugging Face GPT model and want to deploy it on our cluster. The nodes we have privileged access to are configured with MIG.

 python -m torch.utils.collect_env
/bigwork/nhwpruht/.conda/envs/cleanrlcu118/lib/python3.10/runpy.py:126: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
Collecting environment information...
PyTorch version: 2.3.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: CentOS Linux release 7.6.1810 (Core)  (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17

Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.2.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: 
GPU 0: NVIDIA A100 80GB PCIe
  MIG 2g.20gb     Device  0:
  MIG 1g.20gb     Device  1:
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe

Nvidia driver version: 545.23.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                256
On-line CPU(s) list:   0-255
Thread(s) per core:    2
Core(s) per socket:    64
Socket(s):             2
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            25
Model:                 1
Model name:            AMD EPYC 7713 64-Core Processor
Stepping:              1
CPU MHz:               2000.000
CPU max MHz:           2000.0000
CPU min MHz:           1500.0000
BogoMIPS:              4000.26
Virtualization:        AMD-V
L1d cache:             32K
L1i cache:             32K
L2 cache:              512K
L3 cache:              32768K
NUMA node0 CPU(s):     0-63,128-191
NUMA node1 CPU(s):     64-127,192-255
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq overflow_recov succor smca

Versions of relevant libraries:
[pip3] mypy==1.11.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.3.1+cu118
[pip3] torchaudio==2.3.1+cu118
[pip3] torchvision==0.18.1+cu118
[pip3] triton==2.3.1
[conda] numpy                     1.24.4                   pypi_0    pypi
[conda] torch                     2.3.1+cu118              pypi_0    pypi
[conda] torchaudio                2.3.1+cu118              pypi_0    pypi
[conda] torchvision               0.18.1+cu118             pypi_0    pypi
[conda] triton                    2.3.1                    pypi_0    pypi

I collected the UUIDs using

CUDA_VISIBLE_DEVICES=$(nvidia-smi -L|awk '/MIG/ {gsub(/\)/, "", $6); print $6}')
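As a quick sanity check (a minimal sketch), one can verify from inside Python whether PyTorch actually sees the slice selected via CUDA_VISIBLE_DEVICES:

```python
import os
import torch

# With CUDA_VISIBLE_DEVICES set to a single valid MIG UUID,
# device_count() should report 1; in the collect_env output above
# it is effectively 0 ("Is CUDA available: False").
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("is_available:", torch.cuda.is_available())
print("device_count:", torch.cuda.device_count())
```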

You won’t be able to run a DDP job across MIG slices, since each MIG slice is isolated and no communication between them is supported.
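Because the slices are isolated, a common alternative is to run independent single-GPU jobs, pinning each process to one slice via CUDA_VISIBLE_DEVICES. A rough sketch (the UUIDs and the `train.py` script are hypothetical placeholders):

```python
import os
import subprocess

# Hypothetical MIG UUIDs; in practice, collect them from `nvidia-smi -L`.
mig_uuids = ["MIG-0000-aaaa", "MIG-0000-bbbb"]

procs = []
for uuid in mig_uuids:
    # Each child process sees exactly one MIG slice as its only GPU.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=uuid)
    # "train.py" stands in for the actual single-GPU training script.
    procs.append(subprocess.Popen(["python", "train.py"], env=env))

for p in procs:
    p.wait()
```

This gives per-slice parallelism (e.g. hyperparameter sweeps), not data-parallel training of a single model.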


Thanks for the quick and insightful reply.