Reading the documentation https://pytorch.org/cppdocs/notes/tensor_basics.html it looks like accessor() takes a template value which specifies the dimensionality. Is this value allowed to differ from the source tensor's dimensionality? For example, if I want to access a tensor as a logical 1-d array, but the source tensor is a possibly noncontiguous n-d tensor (coming from a slice operation), am I allowed to do my_tensor.accessor<float, 1>()?
I think I found the answer in the generated source file Tensor.h:
// Return a `TensorAccessor` for CPU `Tensor`s. You have to specify scalar type and
// dimension.
template<typename T, size_t N>
TensorAccessor<T,N> accessor() const& {
static_assert(N > 0, "accessor is used for indexing tensor, for scalars use *data<T>()");
TORCH_CHECK(dim() == N, "expected ", N, " dims but tensor has ", dim());
return TensorAccessor<T,N>(data<T>(),sizes().data(),strides().data());
}
I’m a bit confused. I apologize if this is kind of a noob question. But if N is meant to be the same as dim(), what’s the purpose of making N a template parameter in the first place? Why not simply do TensorAccessor<T, dim()> or something like that?
auto srcdata = score.accessor<float, 3>();
cout << srcdata[0][0][0] << endl;
for (int i = 0; i < srcdata.size(0); i++) {
    int max_index = 0;
    float max_value = -1000;
    for (int j = 0; j < srcdata.size(1); j++) {
        for (int k = 0; k < srcdata.size(2); k++) {
            cout << "value: " << srcdata[i][j][k] << endl;
            if (srcdata[i][j][k] > max_value) {
                max_value = srcdata[i][j][k];
                max_index = k;
            }
        }
    }
}
I use the accessor as above, but I cannot get the value with cout << srcdata[i][j][k].
.dim() is a runtime value; you cannot pass it as a template argument, because template arguments must be constant expressions known at compile time.