About libtorch | torch.jit.trace

I use torch.jit.trace as follows:

import torch
import torch.nn as nn

class simpleNet(nn.Module):
    def __init__(self):
        super(simpleNet, self).__init__()
        self.layer1 = nn.Conv2d(3, 16, 3)
        self.layer2 = nn.Conv2d(16, 32, 3)
        self.layer3 = nn.Conv2d(32, 64, 3)

    def forward(self, x):
        x = self.layer1(x)
        x1 = self.layer2(x)
        x2 = self.layer3(x1)
        return x1, x2

img = torch.rand(1, 3, 416, 416)
model = simpleNet()

traced_script_module = torch.jit.trace(model, img)
traced_script_module.save("val_libtorch.pt")

But I think something may be going wrong here, because the model produces two output tensors…
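Tracing a model whose `forward` returns two tensors is fine: the traced module simply returns a tuple. A quick Python-side check (a minimal sketch, reusing the model definition from above) confirms the output type and shapes:

```python
import torch
import torch.nn as nn

class simpleNet(nn.Module):
    def __init__(self):
        super(simpleNet, self).__init__()
        self.layer1 = nn.Conv2d(3, 16, 3)
        self.layer2 = nn.Conv2d(16, 32, 3)
        self.layer3 = nn.Conv2d(32, 64, 3)

    def forward(self, x):
        x = self.layer1(x)
        x1 = self.layer2(x)
        x2 = self.layer3(x1)
        return x1, x2

img = torch.rand(1, 3, 416, 416)
traced = torch.jit.trace(simpleNet(), img)

# Calling the traced module gives a tuple of two tensors,
# so on the C++ side the IValue must be read as a tuple, not a single tensor.
out = traced(img)
print(type(out), out[0].shape, out[1].shape)
```

Each 3x3 convolution without padding shrinks the spatial size by 2, so the two outputs have shapes `(1, 32, 412, 412)` and `(1, 64, 410, 410)`.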

and in the C++ API:

std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("D:\\lib_test\\tensor_libtorch\\val_libtorch.pt");
assert(module != nullptr);
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({ 1, 3, 416, 416 }));
auto out_put = module->forward(inputs).toTensor();

I get an error.

Could you return the outputs as a tuple and retrieve them in C++ via:

auto outputs = module->forward(inputs).toTuple();
torch::Tensor out1 = outputs->elements()[0].toTensor();
torch::Tensor out2 = outputs->elements()[1].toTensor();

Thank you for your help.
