Best practice for running legacy pytorch code on newer GPU?

Hi, I run into this problem quite frequently and it's rather annoying. I'll be trying to reproduce the results of a paper that's about four years old, and instead of the code just running, I have to update the whole codebase to cope with breaking PyTorch interface changes, so that it works with a modern PyTorch version that supports my newer GPU.

Is there a better solution to this problem than rewriting each codebase to match modern PyTorch?
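For context, the kind of breaking change I mean is a function that was renamed or moved between releases. Right now my only workaround is writing small compat shims by hand, along these lines (a minimal sketch using a stand-in namespace rather than real torch, since the exact renames vary by version):

```python
# Sketch of the shim pattern I keep rewriting: try the new dotted location
# first, fall back to the old top-level name. `fake_torch` is a stand-in
# module; the attribute names here are just illustrative examples.
from types import SimpleNamespace

fake_torch = SimpleNamespace(
    linalg=SimpleNamespace(solve=lambda a, b: "new-api"),
)

def resolve(mod, new_path, old_name):
    """Return the attribute at dotted `new_path` if present, else `old_name`."""
    obj = mod
    try:
        for part in new_path.split("."):
            obj = getattr(obj, part)
        return obj
    except AttributeError:
        return getattr(mod, old_name)

solve = resolve(fake_torch, "linalg.solve", "solve")
print(solve(None, None))  # -> new-api
```

This works per-function, but it doesn't scale to dozens of changed call sites across a repo, which is why I'm asking.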
P.S. TensorFlow solves this perfectly with its compat API, `import tensorflow.compat.v1 as tf`, which lets old code run on new releases. I don't know of an equivalent in PyTorch.