Hey there. I’m new to this. I’m using Google Colab, and I’d been ignoring xformers until now, since you hit this issue if you try to install xformers:
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install.
but I would like to use PyTorch 2.0 so I can get better performance in Stable Diffusion.
Can someone help me install it the same way this command does, but with PyTorch 2.0?
!pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 torchtext==0.14.1 torchdata==0.5.1 --extra-index-url https://download.pytorch.org/whl/cu116 -U
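A 2.0 equivalent of that command might look like the sketch below. The version pairings (torchvision 0.15.1, torchaudio 2.0.1, torchtext 0.15.1, torchdata 0.6.0 for torch 2.0.0/cu118) are my best understanding of the release-matched builds, so double-check them against the official previous-versions page before relying on them:

```shell
# Sketch: torch 2.0.0 with CUDA 11.8 builds, mirroring the 1.13.1 command above.
# The +cu118 suffix pins the wheels from the PyTorch index rather than PyPI.
pip install torch==2.0.0+cu118 torchvision==0.15.1+cu118 torchaudio==2.0.1+cu118 \
    torchtext==0.15.1 torchdata==0.6.0 \
    --extra-index-url https://download.pytorch.org/whl/cu118 -U
```

Keeping torch and torchvision on the same `+cu118` tag is exactly what avoids the "compiled with different CUDA versions" error above.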
The default version on Colab is now 2.0 with CUDA 11.8, so you shouldn’t need to install anything new.
But as I’ve said, this is the error I get when starting it without that command line:
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install
I know it should be 2.0 on Google Colab, but it doesn’t start; instead it gives that error message and shuts down my Colab session.
Started to get the same error, and can’t find the solution. Any advice?
!pip3 install torch torchvision. Normally this is enough for me to get it working; I haven’t run into any issues doing this.
I’m trying to install PyTorch and torchvision, but the version is not found. Also, when I try to install another version from the link, it says it is not compatible with torchaudio. Can someone help me figure out which versions of torch, torchvision, and mmcv will work for me? I’m working on SegFormer.
If you need to install an old release, such as 1.5.0, you would also need to check which Python version is used in your environment, as it might be too new for the older release. The same applies to the CUDA version and the GPUs used.
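As a concrete sketch of that Python-version check: the supported set below reflects my recollection of the 1.5.0-era wheels (Python 3.5–3.8), so treat it as an assumption and verify against the release notes:

```python
import sys

# torch==1.5.0 wheels were built for Python 3.5-3.8 (assumption based on the
# release era); a newer interpreter simply won't find a matching wheel and
# pip reports "version not found".
SUPPORTED_PY = {(3, 5), (3, 6), (3, 7), (3, 8)}

def torch_150_wheel_available(version_info=sys.version_info):
    """Return True if a torch==1.5.0 wheel likely exists for this Python."""
    return (version_info[0], version_info[1]) in SUPPORTED_PY
```

Running this on a Python 3.11 environment would return False, which is why the install appears to "not find" the old release.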
i’m using cuda 12.2 which versions will work well
Your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or compile custom CUDA extensions, since the PyTorch binaries ship with their own CUDA runtime dependencies.
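A quick way to see which CUDA runtime the installed wheel actually bundles (the helper name is hypothetical; `torch.version.cuda` is the real attribute being read):

```python
def bundled_cuda_version():
    """Report the CUDA runtime the installed PyTorch wheel ships with.

    This is independent of the locally installed toolkit (what
    `nvcc --version` shows). Returns None if torch isn't installed.
    """
    try:
        import torch
    except ImportError:
        return None
    # e.g. "11.8" for a cu118 wheel; None for a CPU-only build
    return torch.version.cuda
```

So with CUDA 12.2 installed system-wide, you can still run, say, the cu118 wheels: the driver just needs to be new enough, and this check shows what the wheel itself carries.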