numb_gpu = ...  # list with GPU indexes
if numb_gpu > 1:
    print("Let's use", numb_gpu, "GPUs!")
    os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in numb_gpu)
Is there a method to get the GPU index instead of the total number of GPUs?
Do you mean to get the GPU index of the currently used device? In this case you are looking for torch.cuda.current_device().
If you want all the GPU indexes, you can do something like range(numb_gpu), since NVIDIA GPUs are numbered from 0 to N-1 (for N GPUs).
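To illustrate, here is a minimal sketch of that pattern. The GPU count is hardcoded as a placeholder, since on a real machine you would get it from torch.cuda.device_count():

```python
import os

num_gpus = 4  # placeholder count; in practice: torch.cuda.device_count()

# NVIDIA numbers devices 0 .. N-1, so the indexes are just range(N)
gpu_indexes = list(range(num_gpus))
print(gpu_indexes)  # [0, 1, 2, 3]

# The same list can be joined into CUDA_VISIBLE_DEVICES, as in your snippet
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in gpu_indexes)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0,1,2,3
```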
As an additional note, I don't understand your code: the comparison numb_gpu > 1 treats numb_gpu as an int, but the join iterates over it, so one of the two should throw an error.
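A consistent version of the snippet would keep numb_gpu as a list and use len() for the count. This is just a sketch; the [0, 1] indexes are placeholders for whatever devices you actually want to expose:

```python
import os

numb_gpu = [0, 1]  # placeholder list of GPU indexes
if len(numb_gpu) > 1:
    # len() gives the count, while the list itself is what we iterate over
    print("Let's use", len(numb_gpu), "GPUs!")
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(x) for x in numb_gpu)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0,1
```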