I am currently doing a classification task, and I noticed that if I load my model on the GPU, my test accuracy is around 0.7982.
When I load the same model on the CPU, the accuracy comes out to 0.7894.
Although the difference is only about 8.8e-3, I would still like to know why it occurs.
I have also included some code related to setting the manual seed, shown below. Does this have any impact on the changing accuracy?
import torch
import numpy as np

random_seed = 42
torch.manual_seed(random_seed)
if torch.cuda.is_available():
    # set these parameters to ensure that model accuracy and loss remain the same
    # even when the same code is run multiple times
    torch.cuda.manual_seed(random_seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
np.random.seed(random_seed)
I don’t think it should, since those CUDA-related statements are only executed if a CUDA device is available.
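For reference, here is a small sketch (my own test idea, not part of my training code) that I could use to check whether plain per-operation floating-point differences between CPU and GPU are large enough to matter. The sizes and seed here are arbitrary, just for illustration:

```python
import torch

# Run the same matrix multiply on CPU and, if a GPU is available, on GPU,
# then compare the two results element-wise.
torch.manual_seed(42)
x = torch.randn(512, 512)
w = torch.randn(512, 512)

cpu_out = x @ w  # computed entirely on the CPU

if torch.cuda.is_available():
    gpu_out = (x.cuda() @ w.cuda()).cpu()
    # Bitwise equality between devices usually fails; the interesting
    # quantity is how large the rounding differences actually are.
    max_diff = (cpu_out - gpu_out).abs().max().item()
    print(f"max CPU/GPU difference: {max_diff:.2e}")
else:
    print("no GPU available; only the CPU result was computed")
```

If the per-element differences are on the order of 1e-6, that alone would not explain an accuracy gap of several tenths of a percent, so the cause would more likely be elsewhere (e.g. different library versions, as noted below).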
Note: on Colab the PyTorch version is torch==1.12.1+cu113,
but on my own system (CPU only) it is 1.8.1.