Hi. I know this question has been asked a lot, but I'm still running into trouble. I'm trying to use two GPUs with torch.nn.DataParallel, but after I wrap my model, nvidia-smi shows only one GPU in use.
The code I have looks something like this:

```python
import os
import torch
import torch.nn as nn

model = SomeModel()
if args.multiple_gpu:  # Boolean flag
    os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
    model = nn.DataParallel(model)
model = model.to('cuda')
```
With this, nvidia-smi shows activity only on GPU 0, never on GPU 1. I feel like I'm overlooking something in plain sight, but I can't see what. Any ideas are appreciated, thanks.
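For reference, here is a self-contained version of what I'm running, in case the rest of my script matters. nn.Linear is just a toy stand-in for SomeModel, and the device_count() check replaces my args.multiple_gpu flag; the CPU fallback is only so the snippet runs anywhere:

```python
import os
# Setting this at the very top, since CUDA_VISIBLE_DEVICES is only
# read when the CUDA runtime first initializes.
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

import torch
import torch.nn as nn

# Toy stand-in for SomeModel (assumption: any nn.Module behaves the same).
model = nn.Linear(10, 4)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
if device == 'cuda' and torch.cuda.device_count() > 1:
    # Mirrors my args.multiple_gpu branch.
    model = nn.DataParallel(model)
model = model.to(device)

# Forward pass with a dummy batch, just to confirm the wrapped model runs.
x = torch.randn(8, 10, device=device)
out = model(x)
print(out.shape)  # torch.Size([8, 4])
```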