Ignore specific GPU with pytorch in Anaconda

For context, I am trying to get whisperX running locally and followed this guide, installing it in Anaconda (https://www.youtube.com/watch?v=zIvcu8szpxw)

for a little more context… my coding is FULLY reliant on instructions (I’m trying to transcribe DnD sessions I am running)

so….
right now when I try to run the whisperX Python code (GitHub - m-bain/whisperX: WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)), I am getting the following message before the kernel dies.

C:\Users\XXX\.conda\envs\whisperx\lib\site-packages\torch\cuda\__init__.py:283: UserWarning:
Found GPU1 NVIDIA GeForce GTX 1070 which is of cuda capability 6.1.
Minimum and Maximum cuda capability supported by this version of PyTorch is
(7.0) - (12.0)

(followed by some related warnings telling me where to download the correct version, etc)

Besides this 1070, which I am using for its additional VRAM when playing with LLMs, I have an RTX 3070 installed that does meet the required CUDA capability.

running nvidia-smi will list the RTX 3070 as GPU-0 and the 1070 as GPU-1 (which is the way it’s supposed to be)

and lists the current CUDA version as 13.0 (trying to install v12.6 as per the torch website gives the response that the requirements are already fulfilled)


I can change the python code to

device = "cpu"
compute_type = "int8"

and it will run correctly

how can I make sure that torch only tries to run off GPU 0, aka the 3070, while ignoring the 1070 and its insufficient CUDA capability?

no combination of gpu, gpu_0, GPU_0, etc. that I try to throw into the device = "" will work… it will just return

unsupported device [chosen name]
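For reference, PyTorch expects device strings of the form `cpu`, `cuda`, or `cuda:<index>`, which is presumably why names like `gpu_0` get rejected; a minimal sketch:

```python
import torch

# PyTorch device strings are "cpu", "cuda" (the current default GPU),
# or "cuda:<index>" -- not "gpu_0" / "GPU_0".
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print(device)
```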

I hope I was able to provide enough information about the problem and what I’m trying to do, but I’m open to answering any questions… within my limited capacity

Hi Asmaron!

Run the process that runs your pytorch code under control of the
CUDA_VISIBLE_DEVICES environment variable. See, for example, this
thread:

For example, if you were to run your pytorch code under a simple python
session (on unix – I’m not sure how to modify this for windows), you would
run

CUDA_VISIBLE_DEVICES=0 python

to launch your python session.
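The usual Windows equivalents (an assumption about standard cmd/PowerShell behavior, not something tested in this thread) would be:

```shell
# Unix shells: set the variable for a single command only
CUDA_VISIBLE_DEVICES=0 python -c "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"

# Windows cmd.exe (persists for the session):
#   set CUDA_VISIBLE_DEVICES=0
#   python your_script.py
#
# Windows PowerShell:
#   $env:CUDA_VISIBLE_DEVICES = "0"
#   python your_script.py
```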

Best.

K. Frank

I haven’t found a way to use CUDA_VISIBLE_DEVICES=0 as an environment rule for… whatever I’m working with here (using the Anaconda Spyder environment), and I will continue looking later

BUT

playing around with a few commands, I found that in the console part of the environment, I can do:

torch.cuda.set_device(0) or torch.cuda.set_device(1)

with
torch.cuda.get_device_name()
depending on which device I set, giving the correct name (the RTX 3070 being 0) as a response

curiously though
torch.cuda.get_device_properties()
will show
pci_device_id=0, pci_domain_id=0
for both of them (alongside their correct name, UUID, and other stats)
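If it helps to confirm which index maps to which card, each visible device's compute capability can be queried directly — a sketch (7.0 being the minimum that the warning above mentions):

```python
import torch

# List every GPU PyTorch can see, with its compute capability.
# The warning requires capability >= 7.0, which the GTX 1070 (6.1) fails.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(i)
        print(f"cuda:{i}: {torch.cuda.get_device_name(i)} (capability {major}.{minor})")
else:
    print("no CUDA device visible")
```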

I found a solution

within the Python script, I need these two lines of code, after which it will only see the 3070:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
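One caveat: the variable only takes effect if it is set before torch initializes CUDA, so these lines belong above the torch import — a minimal sketch:

```python
import os

# CUDA_VISIBLE_DEVICES is read when the CUDA context is created,
# so set it before torch touches CUDA -- safest is before importing torch.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# With only the RTX 3070 visible, it is re-numbered as cuda:0.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
```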