Distinguishing between the system CUDA and the virtual environment's CUDA

I need to run an existing project that requires a specific version of Python and PyTorch, so I created a virtual environment and installed Python 3.6 and PyTorch with:

conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

I have to run this project on an RTX 3090 GPU (24 GB of memory); when I run nvidia-smi, it reports that the driver is compatible with CUDA up to 11.
However, when I run the project inside that venv, with the Python and PyTorch installed in it, the project can't detect the GPU.
I suspect the venv can't distinguish between the system CUDA and the venv CUDA and is affected by the system CUDA (11). On the other hand, I cannot install a CUDA build above 10 for Python 3.6.
How could I solve this problem?

Could you explain this in more detail, please?
The PyTorch binaries ship with their own CUDA dependencies and will only need a properly installed NVIDIA driver. Your system CUDA toolkit will be used if you build PyTorch from source or a custom CUDA extension.
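To see the distinction in practice, the following sketch inspects the CUDA runtime bundled with the PyTorch binary versus what the driver allows; torch.version.cuda and torch.cuda.is_available() are standard PyTorch attributes, and neither depends on a system-wide CUDA toolkit:

```python
import torch

# CUDA runtime version the PyTorch binary was built against.
# This is bundled with the wheel/conda package and is independent of
# any system-wide CUDA toolkit (e.g. what nvcc reports).
print("bundled CUDA runtime:", torch.version.cuda)

# True only if a working NVIDIA driver is installed AND the bundled
# CUDA build supports the GPU's compute capability.
print("GPU usable:", torch.cuda.is_available())
```

If the first print shows 10.0 but nvidia-smi reports a CUDA 11 driver, that mismatch is fine for older GPUs; the problem here is the GPU architecture, not the toolkit version on the system.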

I also don’t understand this claim as the CUDA toolkit doesn’t depend on Python.

Hi again, thanks for your reply. To explain in more detail: I have an Ubuntu system with an RTX 3090 GPU. After trying different drivers, the minimum driver version that could be installed supported CUDA 11; a CUDA 10 driver couldn't be installed, so my system CUDA became 11.
On the other hand, the author of one project mentioned that to run his project I should install Python 3.6 and PyTorch with this command:
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
I did both, and both installed successfully in one virtual environment (venv), but when I run the following code,

torch.cuda.is_available()

it returns False, and while running, the project can't detect or use the GPU.
It's worth mentioning that I tried to install a newer version of PyTorch for this Python version in this venv, but the installation failed.
This project really needs a GPU to run, and I don't know how to solve this problem.
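A small diagnostic sketch can narrow down why is_available() returns False. The get_arch_list call only exists on newer PyTorch releases, hence the hasattr guard; on a CUDA 10.0 build it would not list sm_86, the compute capability an RTX 3090 requires:

```python
import torch

print("PyTorch build:", torch.__version__)
print("bundled CUDA runtime:", torch.version.cuda)  # e.g. "10.0" for the cudatoolkit=10.0 build

if torch.cuda.is_available():
    print("detected GPU:", torch.cuda.get_device_name(0))
else:
    # On recent PyTorch builds this lists the compute capabilities the
    # binary was compiled for; an RTX 3090 needs sm_86, which no
    # CUDA 10.0 build includes, so the GPU stays invisible to it.
    if hasattr(torch.cuda, "get_arch_list"):
        print("compiled arch list:", torch.cuda.get_arch_list())
```

In short: even with a perfectly working CUDA 11 driver, a binary built for CUDA 10.0 simply has no code for the 3090's architecture.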

As already mentioned, the PyTorch binaries ship with their own CUDA dependencies, and for your 3090 you would need to install a PyTorch binary with CUDA 11.x or 12.x from here. Note that the currently released binaries don't ship with CUDA 10.2 anymore, so you can install any new release.

I used this website and tried every version of PyTorch, but for Python 3.6 I couldn't install any version with CUDA 11.x or 12.x; the installation gave me errors.

This is expected, since Python 3.6 reached its EOL in Dec 2021 as described here, which is why we are no longer building our binaries for it.
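Given that no Python 3.6 builds with CUDA 11.x exist, one possible way forward (assuming the project tolerates a slightly newer Python, which is not guaranteed) is a fresh environment; the exact versions below are assumptions and should be checked against the official install selector:

```shell
# Sketch: new conda env with a supported Python and a CUDA 11.x PyTorch build.
# python=3.8 and cudatoolkit=11.3 are assumptions; verify the combination
# your project actually tolerates before committing to it.
conda create -n proj-py38 python=3.8 -y
conda activate proj-py38
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
# Quick check that the 3090 is now visible:
python -c "import torch; print(torch.cuda.is_available())"
```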