I set up my PyTorch environment following the 'environment_gpu.yml' file from this GitHub project: https://github.com/graphdeeplearning/graphtransformer. It looks like this:
name: graph_transformer
channels:
- pytorch
- dglteam
- conda-forge
- fragcolor
- anaconda
- defaults
dependencies:
- cudatoolkit=10.2
- cudnn=7.6.5
- python=3.7.4
- python-dateutil=2.8.0
- pip=19.2.3
- pytorch=1.6.0
- torchvision==0.7.0
- pillow==6.1
- dgl-cuda10.2=0.5.2
- numpy=1.16.4
- matplotlib=3.1.0
- tensorboard=1.14.0
- tensorboardx=1.8
- future=0.18.2
- absl-py
- networkx=2.3
- scikit-learn=0.21.2
- scipy=1.3.0
- notebook=6.0.0
- h5py=2.9.0
- mkl=2019.4
- ipykernel=5.1.2
- ipython=7.7.0
- ipython_genutils=0.2.0
- ipywidgets=7.5.1
- jupyter=1.0.0
- jupyter_client=5.3.1
- jupyter_console=6.0.0
- jupyter_core=4.5.0
- plotly=4.1.1
- scikit-image=0.15.0
- requests==2.22.0
- tqdm==4.43.0
- pip:
  - tensorflow-gpu==2.1.0
  - tensorflow-estimator==2.1.0
  - tensorboard==2.1.1
However, it raised the warning: 'NVIDIA RTX A5000 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.'
It seems that my environment does not work: the call 'model.to(device)' just freezes and nothing shows up.
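In case it helps, this is a minimal sanity check (plain PyTorch calls, run inside the activated environment) that shows what the install reports about the GPU:

import torch

print(torch.__version__)          # 1.6.0 according to the environment file above
print(torch.version.cuda)         # CUDA version the build was compiled against (10.2 here)
print(torch.cuda.is_available())  # whether PyTorch can see the GPU at all
if torch.cuda.is_available():
    # capability of the first GPU; an RTX A5000 reports (8, 6), i.e. sm_86
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))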
I have read other related posts suggesting that I could solve the problem with:
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
But sadly it shows conflicts and does not work either. Can anybody help me with this problem?
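For completeness, a pip-based variant of the same idea that I could try instead (untested on my side; the exact versions below are my guess from the PyTorch previous-versions page, and DGL would also need a matching CUDA 11.x build) would be:

pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install dgl-cu113

But I am not sure whether the repository code, written against PyTorch 1.6 and DGL 0.5, still runs with these newer versions, which is part of what I am asking.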