GPU is not available for PyTorch

Hi everyone,
I probably have a compatibility problem between my CUDA and PyTorch versions.
I can’t use the GPU, and every time I run torch.cuda.is_available() the result is always False.

I’m using Anaconda (on Windows 11) and I have tried many things (such as upgrading and downgrading various versions), but nothing has worked.

I have a GeForce MX150 and currently my versions are:
CUDA Version: 11.6
PyTorch Version: 1.13.1
Python Version: 3.9.13
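For anyone hitting the same mismatch, a small diagnostic sketch that prints these versions in one place; it assumes nothing beyond a Python install and handles torch being absent:

```python
# Diagnostic sketch: print the version info relevant to CUDA/PyTorch
# mismatches. Handles the case where torch is not installed at all.
import sys

print("Python:", sys.version.split()[0])
try:
    import torch
    print("PyTorch:", torch.__version__)
    print("Built for CUDA:", torch.version.cuda)        # None on CPU-only builds
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this environment")
```

A CPU-only conda package reports `None` for `torch.version.cuda`, which is the most common reason `is_available()` returns False even when the driver is fine.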

Thank you all

I have the same GPU in my Windows laptop and the 1.13.1+cu117 wheels work fine for me:

>>> import torch
>>> torch.__version__
'1.13.1+cu117'
>>> torch.cuda.is_available()
True
>>> torch.cuda.get_device_properties(0)
_CudaDeviceProperties(name='GeForce MX150', major=6, minor=1, total_memory=2048MB, multi_processor_count=3)
>>> torch.randn(1).cuda()
tensor([-0.9126], device='cuda:0')

Thank you for your answer.
Unfortunately, it seems that the ‘1.13.1+cu117’ version doesn’t work with conda, only with pip. Did you use pip or conda to install it?

Thank you again.

After many attempts I managed to get things working (I don’t know exactly how or why).

I report the steps here in case they are of use to someone.

1) Open the Anaconda prompt as administrator.

2) Remove everything:

conda remove pytorch torchvision torchaudio cudatoolkit
conda clean --all

3) Install things separately and activate tensorflow-gpu:

conda install -c anaconda cudatoolkit
conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu

4) Install PyTorch (GPU version compatible with the CUDA version):

conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
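After step 4, a quick sanity check (a sketch; run it inside the activated environment — the tensor test only executes when a GPU is actually visible):

```python
# Post-install sanity check (sketch). Confirms the installed build is
# CUDA-enabled and, if a GPU is visible, that a tensor can reach it.
try:
    import torch
    print(torch.__version__)          # expect a CUDA-enabled build
    print(torch.cuda.is_available())  # expect True after the steps above
    if torch.cuda.is_available():
        x = torch.randn(1).cuda()     # move a tiny tensor onto the GPU
        print(x.device)               # e.g. cuda:0
except ImportError:
    print("PyTorch is not installed in this environment")
```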