Training converges to a worse minimum on Ubuntu than on Windows

I have a problem concerning training a model in PyTorch on Ubuntu vs. Windows.
For a project I switched to Ubuntu because multiprocessing works better there, and I found that executing the exact same code converges to a much worse minimum on Ubuntu than on Windows. This persists even when I fix the torch and numpy seeds, set cuDNN to deterministic mode, and make sure that the exact same Python and package versions are used. Can anyone give me a hint as to what could cause this very strange behaviour?
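For reference, the seeding and determinism setup I use looks roughly like this (the `seed_everything` helper name and the seed value are my own; the calls are the standard torch/numpy/cuDNN settings mentioned above):

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Seed every RNG a typical PyTorch training run touches."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds CPU and all CUDA devices
    os.environ["PYTHONHASHSEED"] = str(seed)
    # cuDNN: trade autotuned speed for reproducible kernels
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Sanity check: two runs with the same seed produce identical draws
seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
print(torch.equal(a, b))
```

Note that this only guarantees run-to-run reproducibility on the same machine and library build, not bitwise-identical results across operating systems.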