Building a C++ library that works on all LibTorch versions

Hi,

I was wondering whether it is possible to build a C++ library that works on LibTorch 1.5, 1.6, and 1.7 alike, i.e. link against one LibTorch version and have the resulting binary work with multiple LibTorch versions.

I tried building a shared library with a simple function and linking it against LibTorch 1.5 and 1.6. When I dlopen it after importing PyTorch 1.7, it crashes when calling torch::empty:

undefined symbol: _ZN3c104impl23ExcludeDispatchKeyGuardC1ENS_11DispatchKeyE

Linking with LibTorch 1.7 works fine.
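For reference, the unresolved symbol can be demangled with `c++filt` (from GNU binutils); it turns out to be a constructor that presumably was added, or had its ABI changed, between these LibTorch versions:

```shell
# Demangle the unresolved symbol reported by dlopen.
c++filt _ZN3c104impl23ExcludeDispatchKeyGuardC1ENS_11DispatchKeyE
# c10::impl::ExcludeDispatchKeyGuard::ExcludeDispatchKeyGuard(c10::DispatchKey)
```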

My C++ library code looks like the following:

static at::Device get_device(DLContext ctx) {
  switch (ctx.device_type) {
    case kDLCPU:
      return at::Device(torch::kCPU);
    case kDLGPU:
      return at::Device(torch::kCUDA, ctx.device_id);
    default:
      // fall back to CPU
      return at::Device(torch::kCPU);
  }
}

extern "C" {

DLManagedTensor* TAempty(
    std::vector<int64_t> shape,
    DLDataType dtype,
    DLContext ctx) {
  auto options = torch::TensorOptions()
    .layout(torch::kStrided)
    .device(get_device(ctx))
    .dtype(at::toScalarType(dtype));
  torch::Tensor tensor = torch::empty(shape, options);
  return at::toDLPack(tensor);
}

}  // extern "C"

@BarclayII
To my knowledge, this is not possible at the moment. The symbols in different versions of libtorch are not exactly the same: symbols may have moved, their visibility may have changed, and so on.

Thanks! Then I guess I should build one library per LibTorch version.
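In case it helps anyone in the same situation: with one build per LibTorch version, the matching shared object can be selected at load time from the installed PyTorch version. A minimal sketch, assuming a hypothetical `libta_<major>.<minor>.so` naming scheme for the per-version builds:

```python
import ctypes


def library_for(torch_version: str) -> str:
    """Map a PyTorch version string to the matching prebuilt library name.

    The "libta_<major>.<minor>.so" scheme is an assumption; adapt it to
    however the per-version builds are actually named.
    """
    major, minor = torch_version.split(".")[:2]
    return f"libta_{major}.{minor}.so"


def load_extension(torch_version: str) -> ctypes.CDLL:
    # dlopen the library built against the matching LibTorch version;
    # loading a mismatched build reproduces the undefined-symbol crash.
    return ctypes.CDLL(library_for(torch_version))
```

In practice the version string would come from `torch.__version__` after `import torch`, e.g. `library_for("1.7.0")` gives `"libta_1.7.so"`.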

@BarclayII
Just wondering, what are you trying to do? Benchmarking? Will the latest libtorch fit all your needs?

This reply is extremely late, but I'm trying to reuse the PyTorch CUDA allocator in my C++ library, and I intend to support a list of different PyTorch versions.
