[SOLVED] Make Sure That PyTorch Is Using the GPU to Compute

From the http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#autograd tutorial, it seems the way they make sure everything runs on CUDA is to define a dtype for GPU tensors, as in:

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU

and they have lines like:

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

That way it seems possible to avoid sprinkling the silly .cuda() call everywhere in your code. Right? I'm also new, so I'm checking with others.
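To make the idea concrete, here is a minimal sketch of that pattern in the style of the tutorial's two-layer network (the sizes and the forward pass here are just placeholders I made up, not lifted from the tutorial); the only line you change to move everything to the GPU is the dtype one:

import torch

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor  # uncomment this single line to run on GPU

N, D_in, H, D_out = 64, 1000, 100, 10  # illustrative sizes only

# Every tensor is created through .type(dtype), so flipping the dtype
# line above moves the whole script between CPU and GPU.
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

# The computation itself is written once and is unchanged either way.
h = x.mm(w1).clamp(min=0)
y_pred = h.mm(w2)
loss = (y_pred - y).pow(2).sum()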
