Actually, this applies when using the functional version, data_parallel.
I’ve identified the issue and fixed it in https://github.com/pytorch/pytorch/pull/1187
It should be in our next release.
For now, you can do:
device_ids = list(range(torch.cuda.device_count()))
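For example, here's a minimal sketch of the workaround (the `nn.Linear` model and random inputs are just placeholders for illustration): build the `device_ids` list yourself and pass it to `data_parallel` explicitly:

```python
import torch
import torch.nn as nn
from torch.nn.parallel import data_parallel

# Placeholder model and input; substitute your own.
model = nn.Linear(16, 4).cuda()
inputs = torch.randn(8, 16).cuda()

# Workaround: construct device_ids explicitly instead of relying
# on the default, which hits the bug fixed in the PR above.
device_ids = list(range(torch.cuda.device_count()))
outputs = data_parallel(model, inputs, device_ids=device_ids)
```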