I want to use multiple GPUs for training, but since the job runs on a cloud server, I can't know the GPU IDs in advance. However, the tutorials I have seen all require specifying GPU IDs: https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html and https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html#create-model-and-dataparallel
Is there a way to use all available devices without hard-coding their IDs?
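For reference, a sketch of one common approach: `torch.cuda.device_count()` reports how many GPUs are visible to the process, and `nn.DataParallel` called without a `device_ids` argument defaults to using all of them, so no IDs need to be hard-coded. The `nn.Linear` model here is just a hypothetical stand-in for a real model:

```python
import torch
import torch.nn as nn

# Query how many GPUs the runtime actually sees; no hard-coded IDs needed.
n_gpus = torch.cuda.device_count()
print(f"Visible GPUs: {n_gpus}")

model = nn.Linear(10, 2)  # stand-in for your real model

# With no device_ids argument, DataParallel defaults to ALL visible GPUs,
# i.e. device_ids = list(range(torch.cuda.device_count())).
if n_gpus > 1:
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

# Move inputs to the same device as the model's parameters.
device = next(model.parameters()).device
out = model(torch.randn(4, 10).to(device))
print(out.shape)
```

On a machine with no GPUs this falls back to plain CPU execution, so the same script works in either environment.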