Torch not working with CUDA 11.0, torch.cuda.is_available() returns False

I have remote SSH access (as a non-root user) to a machine with an NVIDIA Tesla A100 40GB GPU, which I want to use for training, so I need a CUDA-enabled PyTorch build. I am installing it with conda in a virtual environment. nvidia-smi on the machine shows:

NVIDIA-SMI 450.142.00 Driver Version: 450.142.00 CUDA Version: 11.0

So I first tried to install PyTorch for CUDA with the following command, which I found on the PyTorch website:

conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch

But torch.cuda.is_available() returned False. The same happened when I installed the CUDA 10.2 and CUDA 11.1 builds with the commands from the website; it still returned False.
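For reference, this is roughly the check I run inside the conda environment (a minimal sketch; the printed versions are whatever the installed build reports):

import torch

print(torch.__version__)          # installed PyTorch build
print(torch.version.cuda)         # CUDA version the binary was built against (None for a CPU-only build)
print(torch.cuda.is_available())  # prints False for me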

If I run the command

/usr/local/cuda-11.0$ cat version.txt

I get the following output: CUDA Version 11.0.207

And if I run the command

dpkg -l | grep cuda-toolkit

I get this output:

rc cuda-toolkit-11-3 11.3.1-1 amd64 CUDA Toolkit 11.3 meta-package
rc cuda-toolkit-11-3-config-common 11.3.109-1 all Common config package for CUDA Toolkit 11.3.
rc cuda-toolkit-11-config-common 11.3.109-1 all Common config package for CUDA Toolkit 11.
rc cuda-toolkit-config-common 11.3.109-1 all Common config package for CUDA Toolkit.

Please help!

Make sure the NVIDIA driver is correctly installed, and if you have a local CUDA toolkit installed, build and run a few of the bundled CUDA samples to confirm that the driver and GPU actually work.
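For example, if the samples were installed alongside the local toolkit (they may not be, since dpkg reports the toolkit packages as removed), deviceQuery is usually found under the toolkit directory, roughly:

cd /usr/local/cuda-11.0/samples/1_Utilities/deviceQuery
make
./deviceQuery    # should end with "Result = PASS" if the driver and GPU are working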
Once this is verified, install the latest PyTorch binary built with the CUDA 11.1 or 11.3 runtime.
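For example, the CUDA 11.3 build can be installed with a command along these lines (check pytorch.org for the exact version pins and channels for the current release):

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

Then re-run the torch.cuda.is_available() check from the question inside that environment. Note that the PyTorch binaries ship with their own CUDA runtime and do not use the locally installed CUDA toolkit, so only a working driver is required.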