Compatibility when using LibTorch to load a PyTorch model

Is there any forward or backward compatibility between PyTorch and LibTorch?
When I build a model with PyTorch 1.11.0 (printing torch.__version__ gives 1.10.2), save it with torch.jit.save, and then load it with LibTorch 1.11.0 via torch::jit::load, it causes a problem:
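For context, the loading side looks roughly like the sketch below. This is only a minimal example; the file name traced_model.pt and the input shape are placeholders for whatever was exported from Python.

#include <torch/script.h>
#include <iostream>
#include <vector>

int main() {
  try {
    // Load the TorchScript module exported from Python with torch.jit.save.
    // "traced_model.pt" is a placeholder path.
    torch::jit::script::Module module = torch::jit::load("traced_model.pt");
    module.eval();

    // Run a dummy forward pass to confirm the module actually executes.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));  // shape is an assumption
    torch::Tensor out = module.forward(inputs).toTensor();
    std::cout << "forward OK, output sizes: " << out.sizes() << std::endl;
  } catch (const c10::Error& e) {
    std::cerr << "error loading/running the model:\n" << e.what() << std::endl;
    return 1;
  }
  return 0;
}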

I think the problem here is specifically that your model wants the GPU but you are running it on what looks like a CPU-only build of libtorch.
Typically forward compatibility is reasonable these days, but it is always good to do your own testing.
(What is terrible is PyTorch ABI compatibility, but that is a different story.)
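If the goal is just to get the exported model running on a CPU-only libtorch build, one workaround might be to map it onto the CPU explicitly at load time. A minimal sketch, again with a placeholder file name:

#include <torch/script.h>
#include <torch/cuda.h>

int main() {
  // Pick a device based on what this libtorch build actually supports.
  torch::Device device = torch::cuda::is_available() ? torch::Device(torch::kCUDA)
                                                     : torch::Device(torch::kCPU);

  // torch::jit::load accepts an optional device onto which parameters are mapped.
  torch::jit::script::Module module = torch::jit::load("traced_model.pt", device);
  module.to(device);
  return 0;
}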

Best regards

Thomas

Thanks for your reply!
But I checked my machine, and it is indeed a GPU machine, and my libtorch version is the cxx11 ABI build with CUDA (https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-1.11.0%2Bcu113.zip)

I cannot comment on your exact setup, but note that CUDA is suspiciously absent from the list of backends in the error message, so something is up with that.

Best regards

Thomas

I found another curious thing.
When I use PyTorch to test torch.cuda.is_available(), it returns true;
but when I use libtorch's torch::cuda::is_available(), it returns false. How can that happen?

# Python check: reports true on my machine.
import torch
print(torch.cuda.is_available())

// C++ / libtorch check: reports false on the same machine.
#include <torch/cuda.h>
#include <iostream>

int main(void) {
  std::cout << "CUDA DEVICE COUNT: " << torch::cuda::device_count() << std::endl;
  if (torch::cuda::is_available()) {
    std::cout << "CUDA is available! Training on GPU." << std::endl;
  } else {
    std::cout << "CUDA is NOT available." << std::endl;
  }
  return 0;
}

OK, I finally know what is happening!
My machine has CUDA 11.1, while my libtorch is 1.10.2 built against CUDA 11.3. I thought CUDA was backward compatible, but maybe some ABI incompatibility between the two versions is causing this situation.
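One quick way to check this kind of mismatch is to compare the version supported by the installed driver with the CUDA runtime version the binary links against. A rough sketch using the plain CUDA runtime API (assuming cuda_runtime.h is on the include/link path):

#include <cuda_runtime.h>
#include <iostream>

int main() {
  int driver_version = 0, runtime_version = 0;
  cudaDriverGetVersion(&driver_version);    // highest CUDA version the installed driver supports
  cudaRuntimeGetVersion(&runtime_version);  // CUDA runtime version linked into this binary
  std::cout << "driver:  " << driver_version / 1000 << "." << (driver_version % 1000) / 10 << "\n"
            << "runtime: " << runtime_version / 1000 << "." << (runtime_version % 1000) / 10 << std::endl;
  // If the runtime (e.g. 11.3 from a libtorch+cu113 build) is newer than what the
  // driver supports (e.g. 11.1), CUDA initialization can fail and
  // torch::cuda::is_available() will then report false.
  return 0;
}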