Model performance in papers

Hello. I have been researching deep learning super-resolution networks.
The problem is that I never get the same results reported in published papers when I implement the exact same network with the exact same settings in PyTorch.
I get 0.3 dB to 0.5 dB lower PSNR, which is a big deal in super-resolution.
Is there something the authors do to the model to improve results?
Do they use a lower-level language?
How do they tune their performance on the datasets they present?

Reproducing results from a paper is often non-trivial, especially if the authors did not publish many training details. In the best case, the training script is published along with the library versions that were used. If that's not the case, you would most likely need to experiment with the hyperparameters or contact the authors for more information.
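In super-resolution specifically, one detail worth checking before anything else is the evaluation protocol: many papers report PSNR on the Y (luma) channel of YCbCr rather than on RGB, and shave a border of `scale` pixels before computing it. Differences on that level alone can account for a few tenths of a dB. A minimal sketch of that convention, assuming H x W x 3 RGB arrays in [0, 255] (the helper names `rgb_to_y` and `psnr_y` are mine, not from any particular paper):

```python
import numpy as np

def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """Y (luma) channel of an H x W x 3 RGB image in [0, 255], BT.601 weights."""
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0

def psnr_y(sr: np.ndarray, hr: np.ndarray, scale: int) -> float:
    """PSNR on the Y channel with a `scale`-pixel border shaved off,
    a common convention in super-resolution papers."""
    sr_y = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

If the paper evaluates on Y with border cropping and you evaluate on full RGB, you will see a gap of roughly the size you describe even with an identical model.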
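Beyond that, making your own runs comparable to each other helps a lot while you search for the missing details. A minimal sketch of seeding all the usual RNGs and recording the versions you trained with (the name `seed_everything` is mine):

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    """Seed every RNG a typical PyTorch training run touches."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # also seeds CUDA RNGs on recent PyTorch
    torch.cuda.manual_seed_all(seed)  # explicit, for older versions

seed_everything(42)

# Trade some speed for reproducible cuDNN convolutions.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Log the exact versions, so runs can be compared later.
print(torch.__version__, torch.version.cuda)
```

With determinism pinned down, any remaining gap is more likely to come from a genuine difference in hyperparameters, data preparation, or evaluation rather than run-to-run noise.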