@ptrblck is there a way to avoid having pytorch install the CUDA runtime if I have everything installed on the system already, but still use pre-compiled binaries?
The sizes involved here are a bit insane to me: ~1 GB for the pytorch conda package, almost 1 GB for the CUDA conda package, and ~2 GB for the pytorch pip wheels.
Why do you force the CUDA package as a requirement of the CUDA-enabled pytorch conda package?
I’d like to use pytorch in CI, but given the sizes involved (and the time it takes to compile from source), I’m not sure I want to use pytorch at all anymore.