Error trying to use GPU

I tried running this snippet:

import torch
print(torch.rand(3,3).cuda())

which gave me this error:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-23-78a66e8a8408> in <module>()
      1 import torch
----> 2 print(torch.rand(3,3).cuda())

~/anaconda3/envs/social/lib/python3.5/site-packages/torch/_utils.py in _cuda(self, device, async)
     63         else:
     64             new_type = getattr(torch.cuda, self.__class__.__name__)
---> 65             return new_type(self.size()).copy_(self, async)
     66 
     67 

~/anaconda3/envs/social/lib/python3.5/site-packages/torch/cuda/__init__.py in __new__(cls, *args, **kwargs)
    270 
    271     def __new__(cls, *args, **kwargs):
--> 272         _lazy_init()
    273         # We need this method only for lazy init, so we can remove it
    274         del _CudaBase.__new__

~/anaconda3/envs/social/lib/python3.5/site-packages/torch/cuda/__init__.py in _lazy_init()
     82         raise RuntimeError(
     83             "Cannot re-initialize CUDA in forked subprocess. " + msg)
---> 84     _check_driver()
     85     torch._C._cuda_init()
     86     torch._C._cuda_sparse_init()

~/anaconda3/envs/social/lib/python3.5/site-packages/torch/cuda/__init__.py in _check_driver()
     49 def _check_driver():
     50     if not hasattr(torch._C, '_cuda_isDriverSufficient'):
---> 51         raise AssertionError("Torch not compiled with CUDA enabled")
     52     if not torch._C._cuda_isDriverSufficient():
     53         if torch._C._cuda_getDriverVersion() == 0:

AssertionError: Torch not compiled with CUDA enabled

I installed PyTorch using this command:

conda install pytorch=0.1.12 cuda75 -c pytorch

This is the output of nvcc --version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17

This is the output of nvidia-smi:

Tue Sep 18 13:19:04 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130                Driver Version: 384.130                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 780M    Off  | 00000000:01:00.0 N/A |                  N/A |
| N/A   49C    P8    N/A /  N/A |    294MiB /  4036MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+
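As a side note, a guarded version of the snippet can confirm whether the installed build supports CUDA at all, without hitting the hard AssertionError (a sketch; `torch.cuda.is_available()` returns False both when the build was compiled without CUDA and when no usable driver is found):

```python
import torch

# Print the installed PyTorch version for reference.
print(torch.__version__)

# is_available() is a safe guard before calling .cuda(): it returns
# False instead of raising when the build lacks CUDA support.
if torch.cuda.is_available():
    print(torch.rand(3, 3).cuda())
else:
    print("this build of PyTorch cannot use the GPU")
```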

Hi,

Is there a particular reason why you want to use such an old version? I am not sure that the default package was compiled with CUDA support at that time.

Hello, thanks for the help. My GPU only supports CUDA 7.5. I am okay with using the latest version, but I am not sure which one that is. I went to https://pytorch.org/previous-versions/ and thought that 0.1.12 was the latest one. What is the latest version that can be used with CUDA 7.5?

Hi,

It should be 0.3.0 according to the page you linked.
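If it helps, the install line would then look like this (untested sketch; assumes the pytorch channel still hosts a cuda75 build of 0.3.0, per the previous-versions page you linked):

```shell
conda install pytorch=0.3.0 cuda75 -c pytorch
```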

How do I use the latest version if my GPU is a GTX 780M?