I recently learned that torch.manual_seed(seed) internally calls torch.cuda.manual_seed_all(seed), which sets the seed for generating random numbers on all GPUs.
This raises a question for me.
If I launch program A (using torch.manual_seed(3)) on cuda:0 and then program B (using torch.manual_seed(5)) on cuda:1, both on the same machine, will program A end up using seed 3 or 5?
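For context, here is a minimal single-process sketch of the seeding behavior the question builds on (CPU-only so it runs without a GPU; the cross-process scenario with two programs is not reproduced here and is only described in the comments):

```python
import torch

# torch.manual_seed(seed) seeds the CPU generator and, internally,
# calls torch.cuda.manual_seed_all(seed) for every visible GPU.
# All of this state lives inside the current process.
torch.manual_seed(3)
a = torch.randn(3)

# Reseeding with the same value restores the generator state,
# so the same sequence of random numbers is drawn again.
torch.manual_seed(3)
b = torch.randn(3)
assert torch.equal(a, b)

# Program A and program B in the question would each be a separate
# process, each with its own copy of this RNG state.
```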