As far as I understand, setting torch.backends.cudnn.deterministic = True together with torch.backends.cudnn.benchmark = False in your code (along with setting seeds) should make your code run deterministically.
However, for reasons I don't understand, removing those two lines always gives worse results. Even with deterministic set for cuDNN and everywhere else, I still don't get fully identical results across runs, but removing them causes my loss to stop going any lower (top lines in the attached image). What am I doing wrong?
- PyTorch Lightning 1.7
- Using the DDP accelerator
- Setting seeds for random, np, and torch
- Using the code below for the loader init:

```python
def loader_init_fn(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```
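For reference, here is a minimal sketch of the full setup I mean, assuming a hand-rolled `seed_everything` helper (the name and the seed value 42 are arbitrary, not from any library):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int) -> None:
    # Seed every RNG source and force deterministic cuDNN kernels.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

def worker_init_fn(worker_id: int) -> None:
    # Derive a distinct but reproducible seed per DataLoader worker
    # from the base seed PyTorch assigns to the worker process.
    worker_seed = torch.initial_seed() % 2**32
    random.seed(worker_seed)
    np.random.seed(worker_seed)

# With the same seed, repeated runs draw identical values.
seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
assert torch.equal(a, b)
```

The `worker_init_fn` would be passed to the DataLoader via its `worker_init_fn` argument so that each worker process re-seeds Python and NumPy RNGs, which `torch.manual_seed` alone does not cover.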