Hi!
I tried m_ptrModule->get_method("forward").graph()->outputs().size();
But that returns 1 also for a model with two outputs.
Any help would be appreciated.
Regards,
Roos
Hi Roos,
Very likely, this is TorchScript inheriting its semantics from Python: a function that appears to return multiple values actually returns a single tuple.
So
import torch
@torch.jit.script
def d(x):
    return x, x
print(d.graph)
gives
graph(%x.1 : Tensor):
%3 : (Tensor, Tensor) = prim::TupleConstruct(%x.1, %x.1)
return (%3)
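The same thing is visible in plain Python, with no TorchScript involved: the "two return values" are really one tuple object, which is exactly what the single prim::TupleConstruct output in the graph reflects.

```python
def d(x):
    # "Returning two values" actually builds and returns one tuple.
    return x, x

result = d(1)
print(type(result))  # <class 'tuple'>
print(len(result))   # 2
```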
So a proper check would be to see whether auto tt = graph()->output()->type()->cast<TupleType>() succeeds, and then look at tt->elements().size(), or some such. The elements() also carry the types of the returns, if that matters.
Best regards
Thomas
Hi Thomas,
Thanks for your answer. Unfortunately your solution does not compile, e.g.
'output()' is not a member of torch::jit::Module
and more.
In the end I solved it by passing dummy input through the module like this:
auto outputs = m_ptrModule->forward(inputs);
if (outputs.isTuple())
{
    m_nNumOutputs = outputs.toTuple()->elements().size();
}
else
{
    m_nNumOutputs = 1;
}
But personally I think this is a hack. I can ask the number of inputs:
m_ptrModule->get_method("forward").num_inputs();
But not the number of outputs…
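The shape of that workaround, sketched in plain Python (count_outputs is a hypothetical helper name, not a libtorch API): a tuple result means multiple outputs, anything else counts as one.

```python
def count_outputs(result):
    # Mirrors the C++ isTuple()/toTuple()->elements().size() check:
    # a tuple result means the model has multiple outputs,
    # anything else (e.g. a single tensor) counts as one.
    return len(result) if isinstance(result, tuple) else 1

print(count_outputs((1, 2)))  # 2
print(count_outputs(42))      # 1
```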
Greetings,
Roos
Sorry, output() is a shorthand for asserting outputs().size() == 1 and then using outputs()[0] - but for Node, not Graph… The other bits should still work.
Best regards
Thomas
Hi Thomas,
Thank you. Now it works:
auto outputs = m_ptrModule->get_method("forward").graph()->outputs();
auto asTuple = outputs[0]->type()->cast<c10::TupleType>();
if (asTuple)
{
    m_nNumOutputs = asTuple->elements().size();
}
else
{
    m_nNumOutputs = 1;
}
Greetings,
Roos
Just try
auto outputs = m_ptrModule->get_method("forward").graph()->outputs().size();