I am trying to use PyTorch for the first time with PyCharm. When I try to use CUDA, I get this error:
Traceback (most recent call last):
File "C:/Users/omara/PycharmProjects/test123/test.py", line 4, in <module>
my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
File "C:\Users\omara\anaconda3\envs\deeplearning\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
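This error means the installed torch build has no CUDA support at all, not that the GPU is misconfigured. As a minimal sketch (variable names are illustrative), you can query the build before requesting a device so the same script runs on CPU-only installs instead of raising:

```python
import torch

# torch.cuda.is_available() is False on CPU-only builds, so fall back gracefully
# instead of hard-coding device="cuda" and hitting the AssertionError above.
device = "cuda" if torch.cuda.is_available() else "cpu"
my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device=device)
print(my_tensor.device)
```

This only works around the symptom; to actually use the GPU you still need a CUDA-enabled wheel installed.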
I already installed the CUDA toolkit using the PyTorch command in Anaconda.
How did you install PyTorch? Did you use the correct install command? This is the pip install: pip install torch===1.7.1+cu110 torchvision===0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
I created a new environment, and it is working well. I don’t really know what the exact problem was, but it is solved.
Thank you for your assistance, Dwight
print(torch.cuda.is_available()) returns False after installing torchgeometry. My PyTorch worked well until I installed it. I will update if I find a solution.
Could you check if torchgeometry might have uninstalled your previous PyTorch installation and installed a CPU-only version instead? The logs from the torchgeometry install step should indicate this, and you might want to install it via pip install ... --no-deps or change the requirement for this package.
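One quick way to check what a dependency install left behind: CUDA wheels carry a local version tag such as "1.7.1+cu110", while CPU-only wheels typically show "+cpu" or no tag at all. A small hedged sketch (the helper name is made up):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_torch_version():
    """Return the installed torch version string, or None if torch is absent."""
    try:
        return version("torch")
    except PackageNotFoundError:
        return None

v = installed_torch_version()
if v is None:
    print("torch is not installed in this environment")
elif "+cu" in v:
    print(f"CUDA-enabled wheel installed: {v}")
else:
    print(f"Possibly CPU-only wheel installed: {v}")
```

`pip show torch` gives the same version string from the shell if you prefer not to start Python.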
In the end I switched from Conda to virtualenv and it worked on the first try.
I created my virtualenv with virtualenv virtualenv_name
Then I did
workon virtualenv_name
Then I installed PyTorch as specified on the official PyTorch website, but selecting pip instead of conda as the package manager (Start Locally | PyTorch).
For 2022 readers, please go to the official PyTorch website found here and select the appropriate choices in the table they provide. Copy and paste the auto-generated command, which will uninstall existing torch/torchvision/torchaudio versions and install the CUDA-enabled ones.
If you are working in a conda environment, please remove any conda-installed torch versions before installing packages with pip.
Could you show some install logs of e.g. the attempt to install the current 2.0.0+cu117 pip wheel in a new and empty environment, please?
The log could give us a clue if e.g. pip is unable to find the right wheel because your Python version is too old, or if any other issue occurs.
Hi, I have got a new laptop with an RTX 4060 and CUDA 12.0. I realized that PyTorch does not provide support for CUDA 12.0, and that the only way to run it is via a Docker container (PyTorch | NVIDIA NGC). Could you please suggest alternative approaches? I am new to PyTorch; is there an easier way to get this working?
The PyTorch binaries ship with their own CUDA runtime and CUDA libraries (such as cuBLAS, cuDNN, NCCL, etc.). Your locally installed CUDA toolkit will be used if you build PyTorch from source or custom CUDA extensions. For your 4060 you can install the current stable or nightly PyTorch binaries with CUDA 11.8.
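Since the binaries bundle their own runtime, you can ask torch itself which CUDA and cuDNN versions shipped with it; these are independent of whatever toolkit nvidia-smi or nvcc reports. A short hedged check:

```python
import torch

# Version of the wheel itself, e.g. "2.0.0+cu117" for a CUDA build.
print(torch.__version__)
# CUDA runtime the wheel was compiled against; None on CPU-only builds.
print(torch.version.cuda)
# cuDNN version bundled with the wheel (only meaningful if a GPU is usable).
print(torch.backends.cudnn.version() if torch.cuda.is_available() else "no usable GPU")
```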
Could you post the used pip/conda install command as well as its output, since it should show which PyTorch binary is installed and whether the CPU-only binaries were selected.
nvidia-smi returns the driver version and the CUDA version corresponding to this driver.
Assuming you have installed the cuda-toolkit conda binary, the output of nvidia-smi won’t relate to it. Also, why do you install it instead of using the provided commands?
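To make the distinction concrete: the "CUDA Version" printed by nvidia-smi is the highest CUDA version the driver supports, not what any conda package or PyTorch wheel uses. A hedged helper (name is made up) that reads it and degrades gracefully on machines without the tool:

```python
import re
import shutil
import subprocess

def driver_cuda_version():
    """Return the driver-supported CUDA version from nvidia-smi, or None."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver/tooling on this machine
    out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    # The header line looks like: "... Driver Version: 525.60  CUDA Version: 12.0 ..."
    m = re.search(r"CUDA Version:\s*([\d.]+)", out)
    return m.group(1) if m else None

print(driver_cuda_version())
```

Comparing this value with torch.version.cuda shows the two numbers answer different questions: the driver ceiling versus the runtime the wheel was built with.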