For context, I am trying to get whisperX running locally and followed this guide, installing it in Anaconda (https://www.youtube.com/watch?v=zIvcu8szpxw)
For a little more context… my coding is FULLY reliant on instructions (I'm trying to transcribe DnD sessions I'm running).
so….
Right now, when I try to run the whisperX Python code (https://github.com/m-bain/whisperX), I get the following message before the kernel dies:
C:\Users\XXX\.conda\envs\whisperx\lib\site-packages\torch\cuda\__init__.py:283: UserWarning:
Found GPU1 NVIDIA GeForce GTX 1070 which is of cuda capability 6.1.
Minimum and Maximum cuda capability supported by this version of PyTorch is
(7.0) - (12.0)

(followed by some related warnings telling me where to download the correct version, etc.)
Besides this 1070, which I am using for its additional VRAM when playing with LLMs, I have an RTX 3070 installed that does support the required CUDA capability.
Running nvidia-smi lists the RTX 3070 as GPU 0 and the 1070 as GPU 1 (which is the way it's supposed to be) and shows the current CUDA version as 13.0 (trying to install v12.6 as per the torch website gives the response that the requirements are already fulfilled).
I can change the Python code to
device = "cpu"
compute_type = "int8"
and it will run correctly.
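Concretely, this is the only edit that gets it running (a sketch; I'm assuming the guide's original values were "cuda" and "float16", and that both variables get passed into whisperx.load_model as shown in the whisperX README):

```python
# The CPU fallback that works on my machine: the two values I changed
# from the guide's script. Both end up being passed along the lines of
# whisperx.load_model(model_name, device, compute_type=compute_type).
device = "cpu"         # CPU instead of CUDA
compute_type = "int8"  # int8 quantization; fp16 isn't available on CPU
```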
How can I make sure that torch only tries to run on GPU 0 (the 3070) while ignoring the 1070 and its insufficient CUDA capability?
No combination of gpu, gpu_0, GPU_0, etc. that I throw into device = "" works… it just returns:
unsupported device [chosen name]
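For clarity, here's a minimal sketch of the kind of selection I think I need. As far as I understand, PyTorch only accepts "cpu", "cuda", or "cuda:&lt;index&gt;" as device strings (not "gpu"), and the standard NVIDIA variable CUDA_VISIBLE_DEVICES can hide a GPU entirely, though I believe it has to be set before torch is imported:

```python
import os

# Hide GPU 1 (the GTX 1070) from CUDA entirely, so only the RTX 3070
# is visible. This must happen before `import torch` runs (or before
# the Jupyter kernel starts, e.g. as a system environment variable).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Valid PyTorch device strings are "cpu", "cuda", or "cuda:<index>".
# With the variable above set, "cuda" should mean the 3070 only.
device = "cuda"
```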
I hope I was able to provide enough information about the problem and what I'm trying to do, but I'm open to answering any questions… within my limited capacity.