Measures to make an algorithm's results reproducible

I want to make my results reproducible between successive independent runs. I have followed the measures in the link Reproducibility — PyTorch 1.8.1 documentation
and used:

import numpy as np
import random
import torch

random.seed(0)
torch.manual_seed(0)
np.random.seed(0)

However, I could not use

torch.use_deterministic_algorithms(True)

since not all of the model's blocks have deterministic implementations.

I just wanted to know: is there any other step to reduce the dependency on the randomness involved in a general deep learning algorithm's implementation?

The linked documentation mentions all the necessary steps to make the code reproducible; a sketch consolidating them follows.
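A minimal sketch combining the documented measures, assuming a single-machine CUDA setup; the dataset and the DataLoader call are placeholders:

import os
import random

import numpy as np
import torch

# Seed every RNG that PyTorch code commonly draws from.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)

# Ask cuDNN for deterministic kernels and disable its autotuner,
# which may otherwise pick different kernels between runs.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Needed for deterministic cuBLAS results on CUDA >= 10.2;
# must be set before the first CUDA call.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

# DataLoader workers re-seed their own RNGs; this is the documented
# pattern for deriving each worker's seed from the base seed.
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)

# loader = torch.utils.data.DataLoader(
#     dataset, batch_size=32, num_workers=4,
#     worker_init_fn=seed_worker, generator=g,
# )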
If your model uses a non-deterministic method and thus raises an error after setting torch.use_deterministic_algorithms(True), you could either implement a (slow, but) deterministic approach yourself or, if absolutely necessary, fall back to the CPU implementation for that operation. Note that the second approach in particular would synchronize the code due to the device-to-host data transfer and could slow it down significantly, so I would only recommend it during debugging.
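A minimal sketch of the CPU-fallback idea; the helper name is hypothetical, and index_add serves as an example of an operation documented as non-deterministic on CUDA:

import torch

def run_on_cpu(op, *tensors):
    # Hypothetical helper: moves the inputs to the CPU, runs the op
    # there deterministically, and copies the result back. The
    # device-to-host transfer synchronizes, so debugging use only.
    device = tensors[0].device
    result = op(*(t.cpu() for t in tensors))
    return result.to(device)

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.zeros(5, 3, device=device)
idx = torch.tensor([0, 2, 4], device=device)
src = torch.ones(3, 3, device=device)

out = run_on_cpu(lambda a, i, s: a.index_add(0, i, s), x, idx, src)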
