Unity ML-Agents with older GPU (CC 3.0)

I am trying to use an 'older' GPU, a GTX 660, to do some machine learning in Unity (ML-Agents).
I just learned that the compute capability of my GPU is no longer supported by PyTorch.

Is there any way to work around this, or is my GPU simply too old? If so, what is a safe compute capability to buy so that it won't become unsupported a few months after purchase?
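To make the mismatch concrete, the check that fails can be sketched as follows. This is a minimal illustration: with PyTorch installed, the real values come from `torch.cuda.get_device_capability()` and `torch.cuda.get_arch_list()`; the arch list below is a hypothetical example, not the exact list shipped in any particular release.

```python
# Sketch: does a GPU's compute capability appear in the architecture
# list a PyTorch binary was compiled for? (Illustrative only; real
# values come from torch.cuda.get_device_capability() and
# torch.cuda.get_arch_list().)

def is_supported(device_cc, arch_list):
    """Return True if device_cc, a (major, minor) tuple, matches an
    entry like 'sm_37' or 'sm_75' in arch_list."""
    supported = set()
    for arch in arch_list:
        if arch.startswith("sm_"):
            digits = arch[3:]
            # e.g. 'sm_37' -> (3, 7), 'sm_75' -> (7, 5)
            supported.add((int(digits[:-1]), int(digits[-1])))
    return device_cc in supported

# Hypothetical arch list for a recent binary; a GTX 660 is CC 3.0.
binary_archs = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75"]
print(is_supported((3, 0), binary_archs))  # False -> 'not supported'
print(is_supported((7, 5), binary_archs))  # True
```

If the device capability is missing from the compiled list, the binary cannot run kernels on that GPU, which is exactly the situation with CC 3.0 and recent wheels.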

The binaries no longer ship with this compute capability, but you could try to build PyTorch from source using CUDA <= 10 as described here.
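For reference, such a build would look roughly like the following. This is a sketch, not a complete recipe: it assumes CUDA 10.x and the build prerequisites from the PyTorch README are already installed, and the key part is setting `TORCH_CUDA_ARCH_LIST` so kernels are compiled for sm_30.

```shell
# Sketch of a from-source build targeting an sm_30 GPU (GTX 660).
# Assumes CUDA 10.x and the prerequisites listed in the PyTorch
# README (compiler toolchain, Python dev headers, etc.).
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# Compile CUDA kernels only for compute capability 3.0.
export TORCH_CUDA_ARCH_LIST="3.0"

# Build and install into the current Python environment.
python setup.py install
```

On Windows the same `TORCH_CUDA_ARCH_LIST` variable applies (set via `set` instead of `export`), though the toolchain setup differs.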

I have gone through that tutorial but got lost at the part about installing on Windows with CUDA.
At that point it mentions you must build using NVTX (Nsight Compute) with all the correct versions, but it does not explicitly explain how to "build". I don't think I am advanced enough to follow that tutorial; do you know of any other build-from-source walkthroughs for Windows that will baby me through it more?

I am just going to return my GPU, since I only just bought it. Is it risky to get a GPU with CC 3.5, given that it may not continue to be supported? I am only hoping that whatever GPU I get stays supported for the next eight months or so.

I don't know the exact roadmap for the binaries and wheels, so I cannot comment on future support. Also note that I can't comment on which GPU you should buy (as I'm working at NVIDIA) and would thus recommend checking public benchmarks etc.