I have a model which loads fine in C++ (and works in Python), but when I call `forward` in C++ it fails with unhandled `torch::jit::constant_not_supported_error` exceptions from `insertConstant`. The model consists of convolutions followed by transposed convolutions. I'm working in Visual Studio 2017.
A simple network that causes the problem:

```python
self.common_part = nn.Sequential(
    nn.Conv2d(1, 128, 5, 2, padding=2),
    nn.ReLU(),
    nn.ConvTranspose2d(128, 1, 5, 2, padding=2, output_padding=1),
    nn.ReLU())

def forward(self, x):
    n = self.common_part(x)
    return n
```
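For context, here is a minimal self-contained version of that module. The class name `Net`, the input size, and the `.eval()` call are my assumptions, filled in from the snippet above:

```python
import torch
import torch.nn as nn

class Net(nn.Module):  # hypothetical name; layers taken from the snippet above
    def __init__(self):
        super().__init__()
        self.common_part = nn.Sequential(
            nn.Conv2d(1, 128, 5, 2, padding=2),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 1, 5, 2, padding=2, output_padding=1),
            nn.ReLU())

    def forward(self, x):
        return self.common_part(x)

model = Net().eval()
example = torch.rand(1, 1, 64, 64)
# Stride-2 conv halves the spatial size; the transposed conv with
# output_padding=1 restores it, so the output shape matches the input.
print(model(example).shape)  # torch.Size([1, 1, 64, 64])
```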
I think you'd need to post a more complete overview of what you're doing (instantiating and saving the model, and the C++ side). If you use triple backticks ``` to mark the beginning and end of the code, you'll keep the forum from mangling the formatting.
Thank you for your answer. The C++ is simple and the input tensor is fine (I checked that; I also tried with `torch::ones`, which gives the same errors). CUDA or no CUDA: same errors.
I found some strange things. The exceptions can be ignored. As a sanity check I ran the example code with the torchvision resnet18, and this worked fine. When I train my model on the CPU, the output of the forward function in C++ is fine. But when I train it on the GPU, the output is `None`, both on the CPU and on the GPU. In Python this GPU-trained model works fine on both CPU and GPU. The CUDA versions are the same (10).
An addition:
In the call stack there is an exception from `checked_tensor_unwrap`:
```
if (tensorTypeIdToBackend(expr.type_id()) != backend) {
  AT_ERROR("Expected object of backend ", backend, " but got backend ", tensorTypeIdToBackend(expr.type_id()),
           " for argument #", pos, " '", name, "'");
}
```
Here the backend is CPU(0), `pos` = 2, `name` = 'weight', and `expr.type_id()` = 4.
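That message is the standard device-mismatch error: the weights (`'weight'`, argument #2) live on one backend while the input arrives on another. A quick way to reproduce the same complaint in plain Python, without TorchScript (the layer sizes here are arbitrary, chosen only for illustration):

```python
import torch

conv = torch.nn.Conv2d(1, 4, 3)   # parameters live on the CPU
x = torch.randn(1, 1, 8, 8)

if torch.cuda.is_available():
    try:
        conv(x.cuda())             # CUDA input vs. CPU 'weight'
    except RuntimeError as e:
        print(e)                   # mentions the backend and 'weight', like the C++ error
else:
    # With matching devices the call succeeds: 8 - 3 + 1 = 6 per spatial dim.
    print(conv(x).shape)           # torch.Size([1, 4, 6, 6])
```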
This seems like a bug to me. If I run the example code from the PyTorch tutorial, but move the model to the GPU before tracing, I get the same problem in C++:
```python
import torch
import torchvision

model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
model.cuda()
device = torch.device("cuda")
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example.to(device))
traced_script_module.save("d:\\tmp\\model.pt")
```
Addition:
If I move the model to the CPU before tracing and saving, I can run `forward` successfully on the CPU, but not on the GPU. Probably a bug, I suppose.
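A workaround consistent with the observations above is to move the model back to the CPU before tracing, then move the loaded ScriptModule to the desired device on the C++ side (e.g. `module.to(torch::kCUDA)`); that this fully avoids the issue on older LibTorch builds is an assumption. A sketch of the Python side, using a stand-in model with the same layers as the snippet earlier in the thread:

```python
import torch
import torch.nn as nn

# Stand-in for the trained model (same layers as earlier in the thread).
model = nn.Sequential(
    nn.Conv2d(1, 128, 5, 2, padding=2),
    nn.ReLU(),
    nn.ConvTranspose2d(128, 1, 5, 2, padding=2, output_padding=1),
    nn.ReLU()).eval()

if torch.cuda.is_available():
    model.cuda()  # e.g. after GPU training

example = torch.rand(1, 1, 64, 64)

# Move everything to the CPU before tracing, so the constants baked into
# the traced graph are CPU tensors; re-dispatch to CUDA on the C++ side.
model.cpu()
traced = torch.jit.trace(model, example)
traced.save("model_cpu.pt")
```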