I am trying to use PyTorch for the first time with PyCharm. When I try to use CUDA, I get this error:
Traceback (most recent call last):
File "C:/Users/omara/PycharmProjects/test123/test.py", line 4, in <module>
my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
File "C:\Users\omara\anaconda3\envs\deeplearning\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
I already installed the CUDA toolkit using the PyTorch command in the Anaconda prompt.
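For reference, a device fallback avoids the crash while the install is being sorted out (a minimal sketch; it runs on CPU-only builds as well):

```python
import torch

# Use the GPU only if this torch build actually has CUDA support;
# on a CPU-only build this falls back to "cpu" instead of raising.
device = "cuda" if torch.cuda.is_available() else "cpu"
my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device=device)
print(my_tensor.device)
```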
Could you check whether torchgeometry might have uninstalled your previous PyTorch installation and installed a CPU-only version instead? During the install step of torchgeometry the logs should indicate this, and you might want to install it via pip install ... --no-deps or change the requirement for this package.
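You can verify which build ended up installed directly from Python (a quick diagnostic sketch; the exact version strings are examples):

```python
import torch

# A CPU-only build typically reports a version like "2.0.0+cpu",
# a CUDA build something like "2.0.0+cu117".
print(torch.__version__)
# None on a CPU-only build, otherwise the CUDA runtime version shipped with the wheel.
print(torch.version.cuda)
print(torch.cuda.is_available())
```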
For 2022 readers: please go to the official PyTorch website and select the appropriate choices in the table they provide. Copy and paste the auto-generated command, which will uninstall existing torch/torchvision/torchaudio versions and install the CUDA-enabled ones.
If you are working in a conda environment, please remove the existing conda-installed torch packages before installing anything via pip.
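The cleanup and reinstall could look like this (a sketch; the cu118 index URL is an example, use whatever command pytorch.org generates for your setup):

```shell
# Remove any conda-installed PyTorch packages first so they don't shadow the pip wheels
conda remove pytorch torchvision torchaudio

# Then install the CUDA-enabled wheels with the command from pytorch.org
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```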
Could you post the install logs from, e.g., an attempt to install the current 2.0.0+cu117 pip wheel in a new, empty environment, please?
The log could give us a clue whether, e.g., pip is unable to find the right wheel because your Python version is too old, or whether any other issue occurs.
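A fresh-environment test run could be sketched like this (environment name and Python version are just examples):

```shell
# Create and activate a clean environment
conda create -n torch-test python=3.10 -y
conda activate torch-test

# Install the cu117 wheel and capture the full log for inspection
pip install torch --index-url https://download.pytorch.org/whl/cu117 2>&1 | tee install.log
```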
Hi, I have a new laptop with an RTX 4060 and CUDA 12.0. I realized that PyTorch does not provide binaries for CUDA 12.0, and the only way to run it seems to be a Docker container (PyTorch | NVIDIA NGC). Could you please suggest alternative approaches? I am new to PyTorch; is there an easier way to get this working?
The PyTorch binaries ship with their own CUDA runtime and CUDA libraries (such as cuBLAS, cuDNN, NCCL, etc.). Your locally installed CUDA toolkit is used only if you build PyTorch from source or compile custom CUDA extensions. For your 4060 you can install the current stable or nightly PyTorch binaries with CUDA 11.8.