PyTorch's DataParallel is only using one GPU

Hi. This question seems to have been asked a lot, but I'm still having trouble. I'm trying to use two GPUs via torch.nn.DataParallel, but when I wrap my model, nvidia-smi shows only one of them being used.

The code I have looks something like:

import os

import torch
import torch.nn as nn

model = SomeModel()

if args.multiple_gpu:  # Boolean
    os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
    model = nn.DataParallel(model)

model ='cuda')

nvidia-smi shows activity only on GPU 0, not on GPU 1. I feel like I'm overlooking something in plain sight, but I can't see what. Any suggestions are appreciated, thanks.

You should specify the device IDs explicitly:

model = nn.DataParallel(model, device_ids=[0, 1])

I've tried that as well; besides, device_ids defaults to all available devices anyway.
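Since device_ids defaults to every device PyTorch can see, one quick sanity check (a suggestion, not something from the thread) is to print how many GPUs are actually visible to the process before wrapping the model:

```python
import torch

# DataParallel's device_ids defaults to all visible devices,
# so confirm how many devices PyTorch actually sees.
n = torch.cuda.device_count()
print(f"visible GPUs: {n}")
for i in range(n):
    print(i, torch.cuda.get_device_name(i))
```

If this prints 1 instead of 2, the problem is device visibility rather than DataParallel itself.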

I tried the snippet below, which is similar to yours, and it runs multi-GPU on my machine.

import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 1, 1)
model = nn.DataParallel(model).cuda()

input = torch.randn(64, 3, 8, 8).cuda()
output = model(input)

I’m not sure what the difference between mine and yours could be.
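One difference worth checking (my guess, not confirmed in the thread): CUDA_VISIBLE_DEVICES only takes effect if it is set before CUDA is initialized in the process, so setting it mid-script, after other CUDA calls have already run, can silently do nothing. A safer pattern is to set it before importing torch:

```python
import os

# Set device visibility before torch is imported, so it takes
# effect before CUDA is initialized anywhere in the process.
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

import torch

# With the env var in place, this reflects the restricted view.
print(torch.cuda.device_count())
```

Alternatively, set it from the shell (`CUDA_VISIBLE_DEVICES=0,1 python`), which avoids ordering issues entirely.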