CUDA and PyTorch compatibility issue on Windows 11, NVIDIA GTX 1650 graphics

Can someone guide me through a proper installation of CUDA, TensorFlow, and PyTorch from the beginning, with compatible versions for my local machine? And which Python version should I use, so that its packages are compatible with CUDA, torch, and tensorflow?

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 536.67                 Driver Version: 536.67       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf           Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1650      WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   44C    P8               7W /  50W |      0MiB /  4096MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                            |
+---------------------------------------------------------------------------------------+

Your GTX 1650, with a compute capability of 7.5, is supported in all currently released PyTorch binaries and you can install the stable or nightly release from here.
You would only need to install an NVIDIA driver (which seems to be the case already) as the binaries ship with their own CUDA dependencies.
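If it helps, once the binaries are installed you can run a quick check to confirm the wheel sees the GPU (device index 0 is assumed for a single-GPU machine):

import torch

print(torch.__version__)          # installed PyTorch build
print(torch.version.cuda)         # CUDA version the binary ships with
print(torch.cuda.is_available())  # should print True if the driver is picked up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the GTX 1650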

I tried:
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia

It worked:

import torch

C:\Users\arnit\anaconda3\envs\PyTorch\python.exe C:\Users\arnit\PycharmProjects\pythonProject\

but the CUDA availability check comes back False when I run a project from the D: drive… Can you help me with adding the path?

Sorry, but I’m not familiar enough with your Windows environment and don’t know what might be causing the issue. I would start by checking which environment is active and making sure the same python.exe from that PyTorch env is the one actually being used.
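As a minimal sketch of that check (run it from inside the project that reports False), print the interpreter path next to the CUDA check:

import sys
import torch

print(sys.executable)             # which python.exe is actually running
print(torch.__file__)             # which torch installation gets imported
print(torch.cuda.is_available())  # False often means a different / CPU-only env is active

If sys.executable does not point into the PyTorch env, select that env’s python.exe as the project interpreter.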

Yeah, I’ve resolved the compatibility issue, but I’m running into another one:

E torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 308.00 MiB (GPU 0; 4.00 GiB total capacity; 5.51 GiB already allocated; 0 bytes free; 5.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Can you help me resolve this error? I can’t figure out how to use less memory.

You would need to reduce the batch size or use a smaller model to lower the memory usage.
Alternatively, you could look into torch.utils.checkpoint to trade compute for memory.
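A minimal sketch of the checkpointing idea (the layer sizes and batch size here are made up for illustration); reducing the batch size remains the simplest fix:

import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

# Hypothetical model split into two segments.
seg1 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())
seg2 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())

x = torch.randn(8, 1024, requires_grad=True)  # a smaller batch also lowers memory

# Activations of seg1 are not kept; they are recomputed during the backward pass.
h = checkpoint(seg1, x, use_reentrant=False)
out = seg2(h)
out.sum().backward()

The error message also mentions PYTORCH_CUDA_ALLOC_CONF; setting max_split_size_mb only helps when reserved memory is much larger than allocated memory, so the batch size is usually the first thing to try.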

Thank you … reducing batch_size worked fine

You first create a conda environment with the required Python version (mostly 3.8.*), then use pip3 install instead of conda install inside that environment. After that you can select the environment as your Python interpreter / kernel, and the program will run with the GPU-enabled torch (a rough sketch of the steps is below).
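For reference, a sketch of those steps (the environment name and the cu117 wheel index are just examples; pick the exact command for your CUDA version from pytorch.org):

conda create -n torch-gpu python=3.8
conda activate torch-gpu
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

Then point PyCharm (or whatever IDE you use) at this environment’s python.exe so the project runs with the GPU-enabled torch.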