How to use multiple GPUs in PyTorch?

How do I use multiple GPUs in PyTorch? A concrete example such as CIFAR-100 or MNIST would help me understand it better.
Thanks in advance


The documentation has a detailed tutorial on how this can be done.
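
For GPUs on a single machine, the usual starting point is nn.DataParallel. Here is a minimal sketch to illustrate the idea (the linear model and the batch shape are placeholders, not from the tutorial):

import torch
import torch.nn as nn

# Any nn.Module works here; a small linear layer keeps the example self-contained
model = nn.Linear(10, 2)

if torch.cuda.device_count() > 1:
    # DataParallel replicates the model on each GPU and splits the input
    # batch across them, gathering the outputs back on the default device
    model = nn.DataParallel(model)

model = model.to("cuda")

# The batch dimension (64 here) is what gets split across the GPUs
x = torch.randn(64, 10, device="cuda")
out = model(x)  # shape: (64, 2)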

You could also use torch.distributed, which is useful if your GPUs are spread across more than one machine.

Here is a very simple snippet to give you a grasp of how it can be done. I'm using torch.distributed.launch below, which spawns nproc_per_node processes and passes each one its --local_rank. Save this snippet as a Python module (say torch_dist_tuto.py), then run python -m torch.distributed.launch --nproc_per_node=4 torch_dist_tuto.py

import argparse
import os

import torch
import torch.distributed as dist

# torch.distributed.launch passes each process its local rank as --local_rank
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int)
args = parser.parse_args()

rank = args.local_rank  # on a single machine, the local rank is also the global rank
size = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1

# The "gloo" backend works with CPU tensors; use "nccl" for GPU tensors
dist.init_process_group("gloo", rank=rank, world_size=size)

# Each process sums its own strided slice of [0, 1, ..., 9]
x = torch.arange(10)[rank::size].sum()

print("Process rank {}, partial result {}".format(rank, x))

# Sum the partial results from all processes into x on rank 0
dist.reduce(x, dst=0)
if rank == 0:
    print("Final result:", x)

Try running this with other values of --nproc_per_node and see the magic.
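
To check your output against the expected arithmetic: with --nproc_per_node=4, the ranks sum the slices [0, 4, 8], [1, 5, 9], [2, 6] and [3, 7] (partial results 12, 15, 8 and 10), and the reduce on rank 0 combines them into 45, the same as torch.arange(10).sum().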


Thanks, @LeviViana, I appreciate it.
