Ubuntu C++ GPU load model error

I created a TorchScript file from a pretrained ResNet-18 model in Python:

import torch
from torchvision import transforms
from pytorchcv.model_provider import get_model as ptcv_get_model

class Image2VectorResnet18():
    def __init__(self, frameWidth, frameHeight, device):
        self.device = device
        self.model = ptcv_get_model('resnet18', pretrained=True)
        self.model = self.model.to(self.device)
        self.model.eval()
        self.toTensor = transforms.ToTensor()
        self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

    def forward(self, image):
        image = self.normalize(self.toTensor(image)).unsqueeze(0).to(self.device)
        vector = self.model(image)
        return vector

Parameters:

frameWidth = 224
frameHeight = 224

Select "cuda" as device:

device = torch.device("cuda")

Initialise model:

image2Vector = Image2VectorResnet18(frameWidth, frameHeight, device)
model = image2Vector.model

Overwrite the output stage with identity:

model.output = torch.nn.Identity()
newmodel = model
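Swapping the output stage for torch.nn.Identity is what turns the classifier into a feature extractor: the forward pass then returns the penultimate feature vector instead of class scores. A minimal sketch of the same trick, using a hypothetical tiny stand-in classifier rather than the real ResNet-18:

```python
import torch

# Hypothetical stand-in classifier: a feature stage followed by a 10-way head.
classifier = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 4 * 4, 16),  # feature stage, 16-dim output
    torch.nn.Linear(16, 10),         # classification head
)

# Replacing the head with Identity makes the network emit the
# 16-dim feature vector instead of the 10 class scores.
classifier[2] = torch.nn.Identity()

x = torch.ones(1, 3, 4, 4)
print(tuple(classifier(x).shape))  # (1, 16)
```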

Create dummy input:

image = torch.ones(1, 3, frameWidth, frameHeight).to(device)

Create jit file:

traced_script_module = torch.jit.trace(newmodel, image)
traced_script_module.save('resnet18.pt')
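Before handing the .pt file to C++, it can help to verify in Python that the traced module reloads and runs. A minimal sketch, using a hypothetical tiny stand-in network instead of the pretrained ResNet-18 so it runs without downloading weights; note that torch.jit.load accepts a map_location argument, which on the Python side can remap a GPU-traced file back onto the CPU:

```python
import torch

# Stand-in for the traced model: small conv net, same (1, 3, 224, 224) input.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
).eval()

example = torch.ones(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("model_cpu.pt")

# map_location remaps saved tensors at load time, so a CPU-only Python
# process can still open the file and run inference.
reloaded = torch.jit.load("model_cpu.pt", map_location="cpu")
out = reloaded(example)
print(tuple(out.shape))  # (1, 8)
```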

When I try to load this in C++ with:

moduleFeatureVector = torch::jit::load("resnet18.pt");

This works if I use 'cpu' as the device in Python, but if I use 'cuda', I get:

Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].

CPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]
Meta: registered at aten/src/ATen/RegisterMeta.cpp:12703 [kernel]
BackendSelect: registered at aten/src/ATen/RegisterBackendSelect.cpp:665 [kernel]
Python: registered at …/aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]
Named: registered at …/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at …/aten/src/ATen/ConjugateFallback.cpp:22 [kernel]
Negative: fallthrough registered at …/aten/src/ATen/native/NegateFallback.cpp:22 [kernel]
ADInplaceOrView: fallthrough registered at …/aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradCPU: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradCUDA: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradXLA: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradLazy: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradXPU: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradMLC: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradHPU: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradNestedTensor: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradPrivateUse1: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradPrivateUse2: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
AutogradPrivateUse3: registered at …/torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]
Tracer: registered at …/torch/csrc/autograd/generated/TraceType_2.cpp:11425 [kernel]
UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at …/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
Autocast: fallthrough registered at …/aten/src/ATen/autocast_mode.cpp:305 [backend fallback]
Batched: registered at …/aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at …/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]

Based on the error message, it seems you are trying to load a model traced on the GPU while your libtorch build doesn't support the CUDA backend (note that 'aten::empty_strided' is registered for CPU but not CUDA in the list above). Make sure to install the libtorch version built with CUDA, or build it from source with CUDA enabled.
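A quick way to confirm that a given PyTorch build actually has a working CUDA backend is torch.cuda.is_available(); libtorch exposes the same check in C++ as torch::cuda::is_available(). A minimal sketch of guarding the device choice, which falls back to CPU on a build or machine without CUDA:

```python
import torch

# Use "cuda" only when the current build and machine support it;
# otherwise fall back to "cpu". The C++ analogue in libtorch is
# torch::cuda::is_available().
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device.type)
```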