I got different results from different GPUs

I built a deep-learning model in PyTorch for a specific task and got a good result (68.7%) on an RTX 3090 GPU. I then ran the same source code and dataset on a GeForce RTX 2080, but got a bad result (5.4%). I'm new to GPUs. Why do the results differ, and how can I solve it?
Thanks!
**NVIDIA GeForce RTX 3090:**
Driver Version: 470.86, CUDA Version: 11.4
`nvcc --version` on the RTX 3090:
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0
torch version: 1.7.1

**NVIDIA GeForce RTX 2080:**
Driver Version: 515.57, CUDA Version: 11.7
`nvcc --version` on the RTX 2080:
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0
torch version: 1.7.1+cu110
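
For reference, the versions that PyTorch actually uses at runtime can be printed from within Python (a minimal sketch; note that `nvcc --version` reports the locally installed toolkit, which is not necessarily the CUDA version the PyTorch binaries were built with):

```python
import torch

# Versions the installed PyTorch build actually uses at runtime;
# run on both machines to compare environments directly.
print("torch:", torch.__version__)
print("CUDA (bundled with torch):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```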

Check your input data, the model, and everything else in your script. We've seen similar questions in the past, and they were caused by things like a different folder structure of the dataset, never by the GPUs themselves. Different GPUs can introduce small numerical differences, but nothing on the order of 68.7% vs. 5.4%.
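
For example, assuming a folder-per-class dataset layout (the path below is hypothetical), a quick sanity check you can run on both servers and compare is:

```python
from pathlib import Path

# Hypothetical dataset root; adjust to your own path.
root = Path("data/train")

# Count the files in each class folder; run this on both machines
# and diff the output. Empty or missing folders will stand out.
for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    n_files = sum(1 for f in class_dir.iterdir() if f.is_file())
    marker = "  <-- EMPTY" if n_files == 0 else ""
    print(f"{class_dir.name}: {n_files} files{marker}")
```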

The problem is solved. Some dataset folders were empty on the GeForce RTX 2080-based remote server; that was the problem. Thank you for your useful suggestion and response.
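
In case it helps someone else: a small guard along these lines (again assuming a folder-per-class layout; the path and function name are made up) would have failed fast instead of silently training on an incomplete dataset:

```python
from pathlib import Path

def assert_no_empty_class_dirs(root: str) -> None:
    # Raise before training starts if any class folder contains no files.
    empty = [p.name for p in Path(root).iterdir()
             if p.is_dir() and not any(f.is_file() for f in p.iterdir())]
    if empty:
        raise RuntimeError(f"Empty class folders under {root}: {empty}")

assert_no_empty_class_dirs("data/train")  # hypothetical path
```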