Hi, All
I have an inquiry about converting torch.max(x, dim=1)[0] to C++.
But torch::max(x) only allows one argument. How do I make sure the conversion is equivalent and applies to dim=1?
Any suggestion is welcome.
Thanks,
Hi,
There are multiple overloads of max: https://pytorch.org/cppdocs/api/function_namespaceat_1ac66d13a86d2ee2c974cd7d22de2ec409.html?highlight=max
In particular you want the one where you can specify the dim.
Note also that [0] won’t work with the tuple<> output; you will need to use std::get<0>(the_tuple) IIRC.
Thanks a lot.
I modified the code to torch::max(x, 1, false), but it still fails to compile:
“No viable conversion from returned value of type ‘std::tuple<Tensor, Tensor>’ to function return type ‘torch::Tensor’”
See the second part of my comment above: you need to unpack the tuple values using std::get<0>().
Thanks for the guide.
std::get<0>(max(x,1,false)) seems to work.
There is another case with torch.max(x1, 1)[1].data.
In that case, is std::get<1>(max(x1, 1, false)) equivalent to torch.max(x1, 1)[1].data?
I ask because std::get<1>(max(x1, 1, false)).data() is also valid.
Just to confirm.
Hi,
In general, you should never use .data. So you should change the Python version of that code first.
May I ask what the best alternative to .data is?
Will it cause issues in C++?
Thanks
It will cause issues on the python side as well.
If you want a Tensor with the same content that does not share autograd history, you should use .detach() (and the same C++ function when porting).