How can I retrieve the list of compute capabilities supported by a particular compiled version of libtorch? Essentially, I need to inspect the fat binary that was produced and verify that it contains support for the expected compute capabilities/GPUs, including the embedded PTX version for forward compatibility with future GPUs. This build of libtorch was compiled without Python support, for static C++ linking only, so the solution needs to work from C++. Alternatively, is there some way to inspect the static libraries “libtorch.a” or “libtorch_cuda.a” directly?
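For reference, this is roughly the kind of inspection I have in mind, using cuobjdump from the CUDA toolkit (assuming it can be pointed at the static archive; I haven't verified this against libtorch specifically):

```shell
# List the embedded cubins: one entry per compiled compute
# capability (e.g. sm_70, sm_80) baked into the fat binary.
cuobjdump --list-elf libtorch_cuda.a

# List the embedded PTX: shows which virtual architecture(s)
# are retained for JIT forward compatibility on newer GPUs.
cuobjdump --list-ptx libtorch_cuda.a
```

If there is a cleaner way to query this, either at build time or at runtime from C++, I'd prefer that over parsing tool output.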