Recently I installed the CUDA toolkit 12.6 from NVIDIA, but it's not working when I try to access it with torch. Why is that? I downloaded torchvision for CUDA 12.4. Could you please let me know how to solve this issue?
Your locally installed CUDA toolkit won't be used unless you build PyTorch from source or build a custom CUDA extension, since the PyTorch binaries ship with their own CUDA runtime dependencies.
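For example, a quick way to confirm which CUDA runtime the installed binaries actually use (a minimal sketch; it only assumes that torch can be imported):

```python
import torch

# CUDA runtime version the PyTorch binary was built against; this comes
# from the wheel itself, not from the locally installed CUDA toolkit.
print(torch.version.cuda)

# Whether PyTorch can currently see a GPU (requires a sufficiently new driver).
print(torch.cuda.is_available())

# Name of the first visible device, if any.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```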
How do I build it? Could you let me know?
You don't need to build PyTorch; you can just install the binaries (pip wheels or conda binaries) as described here.
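Once the binaries are installed, a minimal sanity check could look like this (the install command in the comment is illustrative; the exact command comes from the selector on the install page):

```python
# Installed with an illustrative command such as:
#   pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu124
import torch

# The reported version should carry the CUDA tag of the wheel,
# e.g. something like "2.x.y+cu124" (illustrative value).
print(torch.__version__)

if torch.cuda.is_available():
    # Run a tiny operation on the GPU to confirm the bundled runtime works.
    x = torch.randn(3, 3, device="cuda")
    print((x @ x).sum())
else:
    print("PyTorch cannot see a GPU - check the NVIDIA driver installation.")
```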
I tried that and downloaded the build for 12.4, but it's still not working.
Could you post the installation log as well as the output of python -m torch.utils.collect_env?
It says there is no module named torch.utils.collect_env.