GPU not detected by torch

import torch

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(100000):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
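Note that with the cuda:0 line still commented out, everything above runs on the CPU. A minimal sketch of the usual device-selection pattern, assuming a CUDA-enabled PyTorch build:

import torch

# Pick the first GPU when CUDA is usable, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dtype = torch.float

# Tensors created with device=device land on the GPU when one is found,
# so the matrix multiplies in the loop above would run there as well.
x = torch.randn(64, 1000, device=device, dtype=dtype)
w1 = torch.randn(1000, 100, device=device, dtype=dtype)
print(x.mm(w1).device)  # cuda:0 when a GPU is picked up, cpu otherwise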

To check whether the GPU was doing its job, I decided to compare performance with and without the GPU.
It was equally slow in both cases. Then I ran this:

if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
else:
    print("no usable gpus")

Output:

no usable gpus
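Note that torch.cuda.device_count() > 1 only reports multi-GPU machines, so a single 930M prints "no usable gpus" here even when CUDA itself works. A more direct check, sketched with the standard torch.cuda calls:

import torch

# Basic diagnostics for whether this PyTorch build can see a GPU at all.
print(torch.cuda.is_available())   # False means a CPU-only build or a driver problem
print(torch.cuda.device_count())   # 1 is the expected count for a single GeForce 930M
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the detected GPU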

I have CUDA installed properly and an Nvidia GeForce 930M. Why doesn’t it find any GPUs?

How did you install PyTorch? Which version of CUDA do you have installed?

I installed PyTorch with pip on Windows 10 and have CUDA 9.1.

I don’t know how these work on Windows; @peterjc123 might.

@varunGitBoi What error does it throw if you explicitly create a new tensor on GPU?

import torch
a = torch.cuda.FloatTensor([1.])
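torch.cuda.FloatTensor is the older constructor; the same probe can be written with the device argument, as a rough sketch:

import torch

# Raises an error if the CUDA runtime or driver is not usable from this build;
# otherwise returns a tensor that lives on the first GPU.
a = torch.tensor([1.], device="cuda")
print(a)  # tensor([1.], device='cuda:0')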

It seems to be some problem with the installation… it returns a "not found" error, but when I go to install torch, it says all packages already exist, although I uninstalled PyTorch earlier. Can you tell me how to completely remove PyTorch?

Either one should do. I don’t know which one you used in your previous uninstallation.

conda uninstall pytorch
pip uninstall torch

It took forever but didn't show any error message.
Then I ran:

a

It returned:

tensor([ 1.], device='cuda:0')

Does cuda:0 mean that I don't have any usable GPUs?

also, this:
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, …], [10, …], [10, …] on 3 GPUs
    model = nn.DataParallel(model)

model.to(device)
outputs:

  (fc): Linear(in_features=5, out_features=2, bias=True)
)

where “Model()” is:

import torch.nn as nn

class Model(nn.Module):
    # Our model

    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())

        return output
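With a single GPU the device_count() > 1 branch is skipped, so nn.DataParallel is never applied; what matters is moving the model and its inputs to the same device. A minimal sketch using the Model class above, with the sizes implied by the output (in_features=5, out_features=2) and a hypothetical batch of 30:

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

input_size, output_size, batch_size = 5, 2, 30

model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # only kicks in with two or more GPUs
model.to(device)

# Inputs have to live on the same device as the model's parameters.
inputs = torch.randn(batch_size, input_size, device=device)
outputs = model(inputs)
print(outputs.device)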

No, it means the data is on the first GPU.

With all the setup done, I tried comparing the time the GPU and CPU versions of the code take to execute. I found that the GPU one took more time, or a comparable amount, every time.
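For what it's worth, CUDA kernel launches are asynchronous, so a fair comparison needs torch.cuda.synchronize() around the timed region, and a 930M is a fairly low-end laptop GPU, so small workloads may not show much speedup. A rough timing sketch:

import time
import torch

def time_matmul(device, size=2048, iters=10):
    # Time repeated matrix multiplies on the given device, synchronizing on
    # CUDA so asynchronous kernel launches are included in the measurement.
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a.mm(b)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.time() - start

print("cpu :", time_matmul(torch.device("cpu")))
if torch.cuda.is_available():
    print("cuda:", time_matmul(torch.device("cuda:0")))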

Disabling Secure Boot in the BIOS helped me.