When I run my code on a server with a GPU, it works fine,
but when I run it on another PC using PyCharm's remote interpreter, it shows:
“AssertionError: Torch not compiled with CUDA enabled”
If there is no GPU on the machine and you try to use CUDA features, it will fail with this error message, because the CUDA tools are not available there.
Is there another way I can use the GPU? Jupyter?
Is your PyCharm supposed to run on the server with the GPU?
It doesn't matter which Python interpreter you are using. To be able to use a GPU, you need a machine with a GPU and a CUDA-enabled PyTorch build installed on it.
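As a sketch of the usual device-agnostic pattern, you can pick the device at runtime, so the same script works on machines with and without a GPU:

```python
import torch

# Pick the GPU only when the installed build and hardware support it;
# otherwise fall back to the CPU, so the script runs on both machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors created with device= land on whichever device was picked.
x = torch.randn(3, 3, device=device)
print(device, x.device)
```

With this pattern, a CPU-only build simply selects "cpu" instead of raising "Torch not compiled with CUDA enabled".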
Hi, I have the same problem.
I installed PyTorch and my laptop has a GPU.
What is wrong, that I get the same error?
You would need to give more information here.
For example: what is your local machine, how did you install PyTorch, and what is already installed on the machine (do you already have CUDA installed)?
I am running this code:
I did not install CUDA and I don't want to.
I just installed PyTorch on Windows 10.
Is there any way to run this code on the CPU?
By the way, I installed torch with:
pip3 install torch==1.3.0+cpu torchvision==0.4.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
If you don't want to use CUDA, don't use it.
Make sure that the code you run does not use any CUDA calls and you won't see this error anymore!
Thanks for your response @alban. Yeah, I don't want it.
I deleted all of the CUDA stuff,
but I don't know where I went wrong, since it still gives me this error.
In one line I see this code:
parser.add_argument("--device_id", type=int, default=0)
which I think uses CUDA by default.
Do you have any idea what default value I should set so that it uses the CPU?
I tried -1, but it gives me an error saying it cannot be negative.
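One way around the non-negative check, sketched here with a hypothetical --device flag (not part of the original script), is to pass the device as a string, so "cpu" is a legal value:

```python
import argparse

# Hypothetical variant of the original --device_id flag: accept a device
# string instead of an integer, so the CPU becomes a valid choice.
parser = argparse.ArgumentParser()
parser.add_argument("--device", type=str, default="cpu", choices=["cpu", "cuda"])

args = parser.parse_args(["--device", "cpu"])
print(args.device)  # cpu
```

The rest of the script would then call network.to(args.device) instead of building a device from an integer id.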
You should get a stack trace with the error. Can you share it? It should point at the place where CUDA is being used.
This is the full error:
Warning (from warnings module):
  File "C:\python\Python37\lib\site-packages\sklearn\externals\joblib\externals\cloudpickle\cloudpickle.py", line 47
    import imp
DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses

Traceback (most recent call last):
  File "C:\Users\Ati\Desktop\bindsnet-master\examples\mnist\reservoir.py", line 95, in <module>
    network.to("cuda")
  File "C:\python\Python37\lib\site-packages\torch\nn\modules\module.py", line 426, in to
    return self._apply(convert)
  File "C:\python\Python37\lib\site-packages\torch\nn\modules\module.py", line 202, in _apply
    module._apply(fn)
  File "C:\python\Python37\lib\site-packages\torch\nn\modules\module.py", line 245, in _apply
    self._buffers[key] = fn(buf)
  File "C:\python\Python37\lib\site-packages\torch\nn\modules\module.py", line 424, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
  File "C:\python\Python37\lib\site-packages\torch\cuda\__init__.py", line 192, in _lazy_init
    _check_driver()
  File "C:\python\Python37\lib\site-packages\torch\cuda\__init__.py", line 95, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
There is the line
network.to("cuda") at line 95 of your script.
You need to remove this .to() call, or change its argument to "cpu", if you don't want to use CUDA.
if gpu: network.to("cuda")
I changed it, but it doesn't work.
I think it says: if gpu is true, then run on CUDA.
So I think I have to change where gpu is set to true and make its default False,
but I don't know where that is.
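As a sketch of what such a guard could look like (network here is a stand-in Linear layer, not the model from the original script), the gpu flag can be derived from torch.cuda.is_available(), so it is automatically False on CPU-only installs:

```python
import torch

# Stand-in for the model in the original script.
network = torch.nn.Linear(4, 2)

# Derive the flag from the runtime check instead of hard-coding True,
# so .to("cuda") is only attempted when CUDA is actually usable.
gpu = torch.cuda.is_available()
network = network.to("cuda" if gpu else "cpu")
print(next(network.parameters()).device)
```

This way there is no default to hunt down in the code: the flag follows whatever the current machine supports.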
Hey, thank you, I finally solved it. Your sentence helped:
“Make sure that the code you run does not use cuda stuff”
I deleted all of the cuda, gpu, and device_id references,
and it's working now.
Hi, I have an issue with using CUDA. I got the error “Torch not compiled with CUDA enabled” when I tried to define a tensor on the GPU.
For the PyTorch installation, I used:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
and I have a GeForce GTX 1080 Ti.
The NVIDIA CUDA driver is 11.0.197 and the display driver version is 451.48.
I am using PyCharm.
When I run torch.cuda.is_available(), I get False.
What is the problem? Please advise me. Thank you.
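A quick way to tell a CPU-only build apart from a driver problem, as a rough diagnostic sketch: torch.version.cuda is None for CPU-only wheels, and in that case no driver update will make is_available() return True — the binary itself has to be reinstalled with CUDA support.

```python
import torch

# If torch.version.cuda is None, the installed PyTorch binary is a
# CPU-only build and needs to be reinstalled with a CUDA variant.
print(torch.__version__)
print(torch.version.cuda)          # None for CPU-only builds
print(torch.cuda.is_available())   # False whenever the build is CPU-only
```

Running these three lines in the same interpreter PyCharm uses also rules out the common case of two environments with different installs.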
Maybe the installation of the binary failed.
Could you create a new conda environment, reinstall the binary, and post the install log here?
I have the same error.
I installed PyTorch with conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch, but print(torch.cuda.is_available()) returns False.
I have created a new environment and installed it there.
Please give me a suggestion on how to resolve this error.
Could you post the install log here, please?