GPU with PyTorch

I’m using a Tesla C2050 GPU, which has compute capability 2.0, on Ubuntu 14.04, and I have installed the CUDA 7.5 driver for the GPU. In PyTorch, I have defined two tensors and moved them to the GPU as follows.
a = torch.ones(5)
b = torch.ones(5)
x = a.cuda()
y = b.cuda()

But I’m unable to add these two tensors (as z = x + y). It gives me the following runtime error. Is this because the GPU is too old?

RuntimeError                              Traceback (most recent call last)
<ipython-input-…> in <module>()
----> 1 z = x + y

/home/user/anaconda3/lib/python3.5/site-packages/torch/tensor.py in __add__(self, other)
    265     # TODO: add tests for operators
    266     def __add__(self, other):
--> 267         return self.add(other)
    268     __radd__ = __add__
    269

RuntimeError: cuda runtime error (8) : invalid device function at /py/conda-bld/pytorch_1493670682084/work/torch/lib/THC/generated/…/generic/THCTensorMathPointwise.cu:246

It’s a bit late, but yes, I think it’s because the GPU is too old:

“[torch] has a CUDA counterpart, that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.” (http://pytorch.org/docs/master/torch.html)
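To make the version check concrete: compute capability is a (major, minor) pair, which PyTorch can report via torch.cuda.get_device_capability(device) when a GPU is present. A minimal sketch of the comparison, with the capability values hard-coded from NVIDIA’s published specs rather than queried live:

```python
# Compute capability is an ordered (major, minor) pair; Python tuple
# comparison matches how the ">= 3.0" requirement is checked.
# Values below are taken from NVIDIA spec sheets, not queried from a device.
required = (3, 0)       # minimum for the PyTorch binaries quoted above
tesla_c2050 = (2, 0)    # the GPU in the question
tesla_k80 = (3, 7)      # the GPU that later worked

print(tesla_c2050 >= required)  # False -> kernels were never built for it,
                                #          hence "invalid device function"
print(tesla_k80 >= required)    # True
```

On a machine with a visible GPU, `torch.cuda.get_device_capability(0)` returns the same kind of tuple for device 0, so you can run this comparison before launching any kernels.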


Thanks. Currently I’m using an NVIDIA K80 GPU for my research work, and it is working fine.