How to get deterministic behavior with different GPUs?

I have 3 different GPUs available but do not get deterministic behavior across them. My GPUs are:

GeForce GTX 1080 Ti
GeForce RTX 2080 Ti
Tesla P100-PCIE-16GB

I'm using PyTorch version 1.3.1 and currently using the following settings to try to achieve the same reproducible result:

import random
import numpy as np
import torch

seed = 3
random.seed(seed)                          # Python RNG
np.random.seed(seed)                       # NumPy RNG
torch.manual_seed(seed)                    # CPU RNG
torch.cuda.manual_seed_all(seed)           # all CUDA device RNGs
torch.backends.cudnn.deterministic = True  # use deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable the cuDNN autotuner

I run resnet32 in 3 different experiments, training for 2 epochs each and predicting with the trained model. I then get the following results:

GeForce GTX 1080 Ti : 0.33333
GeForce RTX 2080 Ti : 0.20000
Tesla P100-PCIE-16GB : 0.33333

I don't really know why I get the same results on the GTX 1080 Ti and the Tesla P100 but not on the RTX 2080 Ti.

Is there something I am missing?

Determinism cannot be guaranteed across different hardware families, so you cannot expect bitwise-identical results between all three devices.
You could update to the latest PyTorch release (1.9.0) and apply all the settings described in the Reproducibility docs to get deterministic behavior between different runs on the same device.
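
For reference, a minimal sketch of what that setup looks like on a 1.8+/1.9.0 release (the CUBLAS_WORKSPACE_CONFIG value and the exact set of flags may differ depending on your CUDA version, so treat this as an outline rather than a drop-in recipe):

import os
import random
import numpy as np
import torch

# needed for deterministic cuBLAS kernels on CUDA 10.2+; set before CUDA work starts
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

seed = 3
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)  # seeds the CPU RNG and all CUDA device RNGs

torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)  # errors out if a non-deterministic op is hit

With torch.use_deterministic_algorithms(True) you get an error instead of silent non-determinism whenever an op without a deterministic implementation is used, which makes it easier to track down the remaining sources of run-to-run variation on a single device.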