I’m trying to iterate over a tensor element-wise in C++, which is proving difficult.
I’ve been able to get the number of dimensions of a tensor (torch::Tensor tensor) using tensor.dim(), and I’m able to get the size of each dimension using torch::size(tensor, dim); however, I can’t figure out how to iterate over the tensor. for (int i = 0; i < sizeof(tensor) / sizeof(tensor[0]); i++) only iterates over each element of the first row.
Is there a way to iterate element-wise over an nth-dimensional torch::Tensor?
You could try using tensor.sizes() or tensor.numel():
torch::Tensor t = torch::randn({ 2, 3, 4 });
float* ptr = t.data_ptr<float>();

// iterate dimension by dimension
for (int64_t z = 0; z < t.size(0); ++z)
{
    for (int64_t y = 0; y < t.size(1); ++y)
    {
        for (int64_t x = 0; x < t.size(2); ++x)
        {
            // the flat pointer walks the innermost dimension fastest,
            // so the element order is [z][y][x]
            printf("Element at [%lld %lld %lld]: %f\n",
                   (long long)z, (long long)y, (long long)x, *ptr++);
        }
    }
}

ptr = t.data_ptr<float>();
// iterate over all elements with a single flat loop
for (int64_t i = 0; i < t.numel(); ++i)
{
    printf("Element %lld: %f\n", (long long)i, *ptr++);
}
Note that this only works for contiguous tensors, so you will want to either TORCH_CHECK(t.is_contiguous()) or call auto tc = t.contiguous() first; in the latter case, though, writes through tc are not reliably reflected in t, because contiguous() may have made a copy.
As a convenient (but not terribly efficient on CPU, since you likely won’t get SIMD out of it) way to index an arbitrarily strided tensor, you can use the array-like interface of t.accessor<...>(), but that requires you to know the number of dimensions at compile time.
As an alternative, you can do the index calculations yourself using strides() and storage_offset().