Did you ever find an answer to this? @JeremySMorgan
No I didn’t. I ended up just adding torch.manual_seed(0) to all my files that import torch. I guess it would be pretty easy to check, but I’m lazy.
Hi @smth! Just a final clarification question, hopefully others will benefit too.
So torch.manual_seed(seed) should fix both the CPU and GPU PyTorch seeds, making the call to torch.cuda.manual_seed redundant. If I use multiple devices, either in data parallel mode or the new sharding feature, do I still have to call torch.cuda.manual_seed_all, or is that also done implicitly by the torch.manual_seed call?
Many thanks
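For what it's worth, a quick way to convince yourself that a single torch.manual_seed call is enough for reproducibility is to reseed and compare the generated tensors. This sketch only demonstrates the CPU generator; on recent PyTorch versions torch.manual_seed also seeds all CUDA devices, but checking that requires a multi-GPU machine:

```python
import torch

# Seed once and draw a tensor, then reseed with the same value and draw again.
# If manual_seed fully controls the generator, both draws must be identical.
torch.manual_seed(0)
a = torch.randn(3)

torch.manual_seed(0)
b = torch.randn(3)

assert torch.equal(a, b)  # same seed, same sequence
```

The same reseed-and-compare pattern works with tensors placed on specific CUDA devices if you want to verify the multi-GPU behaviour yourself.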