Hi everyone, I am a PyTorch beginner trying to get my model to train on a specific GPU on my machine. I run the following code:
```python
torch.cuda.device_count()
torch.cuda.set_device(0)        # note: returns None, so assigning its result is useless
torch.cuda.current_device()     # output: 0
torch.cuda.get_device_name(0)
```
The output of the last command is ‘Tesla K40c’, which is the GPU I want to use. The problem is that the training time per epoch stays exactly the same as on the older Quadro 2000 (GPU 1, compute capability 2.1)…
My question is: how do I set the device and define the tensors and the model correctly (e.g. with `.cuda(0)`?) so that training actually runs on the right GPU? Is it possible to do this purely in Python, without setting anything in bash (e.g. environment variables)?
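For context, this is the pattern I am currently trying to follow (a minimal sketch with a placeholder `nn.Linear` model standing in for my real one; I am not sure this is the right approach):

```python
import torch
import torch.nn as nn

# Select GPU 0 explicitly; fall back to CPU if CUDA is unavailable.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)    # move the model's parameters to the device
x = torch.randn(4, 10, device=device)  # create the input tensor directly on the device
out = model(x)                          # forward pass runs on the selected device
```

Is calling `.to(device)` on both the model and every input batch enough, or do I also need `torch.cuda.set_device`?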