Model accuracy is different when loaded on different hardware

I am currently working on a classification task and I noticed that if I load my model on a GPU, my test accuracy is around 0.7982.
When I load the same model on a CPU, the accuracy comes out to 0.7894.
Although the difference is only about 8.8e-3 (0.7982 − 0.7894 = 0.0088), I would still like to know why this difference occurs.
I have also included some code related to setting the manual seed, as shown below. Does this have any impact on the changing accuracy?

    import numpy as np
    import torch

    random_seed = 42
    torch.manual_seed(random_seed)
    if torch.cuda.is_available():
        # Set these parameters to help ensure that model accuracy and loss
        # remain the same even when the same code is run multiple times.
        torch.cuda.manual_seed(random_seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    np.random.seed(random_seed)

I don’t think it should, since those statements are only executed if a CUDA device is available.
Note: on Colab the PyTorch version is torch==1.12.1+cu113, but on my own system (CPU) PyTorch is 1.8.1.

Small differences in the model output are expected between different platforms due to limited floating point precision and the use of different algorithms. Seeding the code wouldn’t help, since different devices can use different pseudorandom number generators.
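
To make the precision point concrete, here is a minimal sketch (not from the original posts) showing that float32 addition is not associative: summing the same values in a different order, as different CPU/GPU kernels do, typically changes the last bits of the result. The tensor values here are arbitrary.

    import torch

    torch.manual_seed(0)
    x = torch.randn(10_000, dtype=torch.float32)

    s1 = x.sum()             # pairwise/blocked reduction order
    s2 = x.cumsum(0)[-1]     # strictly sequential accumulation of the same values
    ref = x.double().sum()   # higher-precision reference

    print(s1.item(), s2.item())              # usually differ in the last bits
    print((s1.double() - ref).abs().item())  # small rounding error vs. float64
    print((s2.double() - ref).abs().item())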

I have been using torch 1.8.1 with CUDA 11.1 and the accuracy for GoogLeNet was 72.78, but after migrating to the newer torch 1.12.1 with CUDA 11.6 the accuracy changed to 72.76.
Can you help me figure out why this difference occurs?

You could try to narrow down which image prediction(s) differ between both models (in eval() mode) and then compare the intermediate activations as well as the final model output to locate where the difference is coming from, e.g. as in the sketch below.
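
As a minimal sketch of that debugging approach (not from the original posts): register forward hooks to record each submodule's output, run the same batch through both model instances, and compare layer by layer. `model_a`, `model_b`, and `batch` are hypothetical placeholders for your two setups and one input batch.

    import torch

    def capture_activations(model, x):
        """Run one forward pass and record every submodule's output tensor."""
        acts, hooks = {}, []
        for name, module in model.named_modules():
            if name == "":  # skip the root module itself
                continue
            def hook(mod, inp, out, name=name):
                if torch.is_tensor(out):
                    acts[name] = out.detach().float().cpu()
            hooks.append(module.register_forward_hook(hook))
        model.eval()
        with torch.no_grad():
            model(x)
        for h in hooks:
            h.remove()
        return acts

    # Hypothetical usage: model_a/model_b are the same checkpoint loaded on
    # the two setups (e.g. CPU vs. GPU), batch is one identical input batch.
    # acts_a = capture_activations(model_a, batch)
    # acts_b = capture_activations(model_b, batch)
    # for name in acts_a:
    #     if name in acts_b and acts_a[name].shape == acts_b[name].shape:
    #         diff = (acts_a[name] - acts_b[name]).abs().max().item()
    #         print(f"{name}: max abs diff = {diff:.3e}")

The first layer whose maximum absolute difference jumps well above float32 rounding noise is a good candidate for where the two setups start to diverge.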