Can't import torch on my machine with CUDA 12.4

Hi there, I’m not sure if this is the right place to ask, but I just installed the CUDA toolkit to run some GPU-based machine learning workloads on my computer, and I’m running into an issue importing torch.

I’m on Ubuntu 22.04 with Python 3.10.12.
I installed torch via pip3 install torch torchvision torchaudio

If I run python3:

Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.version.cuda
'12.1'

When I run nvcc -V my output is:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0

When I run nvidia-smi my output is:

NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4 

I have an NVIDIA GeForce RTX 3050 Ti.
Based on Table 3 of CUDA Compatibility :: NVIDIA GPU Management and Deployment Documentation, CUDA 12.4 seems like the right version for my NVIDIA driver.

I’m able to run python3 -c 'import torch' with no output, which I assume is good news.
That being said, when I try to import torch into a Jupyter notebook, I get the error:

ModuleNotFoundError: No module named 'torch._custom_ops'; 'torch' is not a package

I was able to find torch._custom_ops myself, so I know it exists, but I’m not sure why it isn’t found in Jupyter Notebook.
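For what it’s worth, this is roughly how I located the module from the terminal interpreter (the where_is helper is just something I wrote for this check, not part of any library):

```python
import importlib.util

def where_is(name):
    """Return the file a module would be imported from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print(where_is("json"))   # a stdlib module, resolves in any environment
print(where_is("torch"))  # None if torch isn't importable from this interpreter
```

Running the same check from a notebook cell should show whether the kernel’s interpreter can resolve torch at all.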

I found this: python - Loading a pretrained model from torch.hub in Sagemaker - Stack Overflow but that didn’t seem relevant given I’m not using Sagemaker and simply trying to get my local machine ready to tackle GPU training tasks.

I would appreciate any help, insight, or simply comments telling me a better place to be asking this question.
Thank you

It seems your Jupyter environment does not use the same PyTorch binary as your terminal, so you would need to fix it by, e.g., reinstalling PyTorch into the same environment your Jupyter kernel uses.
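A quick way to confirm the mismatch is to print the interpreter path in both places; this is a generic check, nothing PyTorch-specific:

```python
import sys

# Run this once in a terminal python3 session and once in a notebook cell.
# If the two paths differ, pip installed torch into one environment while
# Jupyter launches its kernel from another.
interpreter = sys.executable
print(interpreter)
```

From inside a notebook cell, !{sys.executable} -m pip install torch torchvision torchaudio installs into the kernel’s own environment, which avoids the mismatch.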

I don’t see any issues pointing towards CUDA, as a native PyTorch op import is already failing.
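As a side note, the “'torch' is not a package” part of the error can also mean a local file or folder named torch (e.g. a stray torch.py next to the notebook) is shadowing the installed package; a minimal check, assuming the notebook’s working directory is the suspect:

```python
import os

# Anything named "torch" or "torch.py" in the working directory takes
# precedence over the installed package when Python resolves the import.
suspects = [p for p in os.listdir(".") if p in ("torch", "torch.py")]
print(suspects)  # an empty list means nothing here shadows torch
```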