torch.compile, Triton CUDA capability

Hello everyone,
Since I am working with PINNs, the new capabilities of PyTorch 2.0 are very interesting.
On making the transition to PyTorch 2.0, torch.compile itself seemed to work fine, but on calling the compiled model I get the following error:

“torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised RuntimeError: Found NVIDIA GeForce GTX 1080 Ti which is too old to be supported by the triton GPU compiler, which is used as the backend. Triton only supports devices of CUDA Capability >= 7.0, but your device is of CUDA capability 6.1”

Is my GPU too old for PyTorch 2.0? I have CUDA 12.1 installed system-wide and 11.8 in the conda environment.
Regards

Yes, unfortunately your GPU is too old for this path. Triton only targets devices of compute capability >= 7.0 (Volta and newer), and the GTX 1080 Ti is a Pascal card with capability 6.1, exactly as the error says. The CUDA toolkit versions you have installed (12.1 or 11.8) don't matter here; compute capability is a property of the hardware itself. Triton really needs newer GPUs to shine, so I believe this error was added deliberately so that users of older GPUs wouldn't get slowdowns on their models, which would be even more frustrating.
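
If you want the same script to run on both old and new cards, one option is to guard the torch.compile call behind a compute-capability check, so older GPUs simply fall back to eager mode instead of crashing. A minimal sketch, where the tiny model is just a hypothetical stand-in for your actual PINN:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the actual PINN.
model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

if device == "cuda":
    # Compute capability is a hardware property, e.g. (6, 1) on a GTX 1080 Ti.
    major, minor = torch.cuda.get_device_capability()
    if (major, minor) >= (7, 0):
        # The Triton-backed Inductor backend is supported on Volta and newer.
        model = torch.compile(model)
    # On older cards, keep the eager model: slower, but it runs.
```

With this guard the script works everywhere; you just don't get the compile speedups on Pascal hardware.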