I have a tensor given as:
auto my_tensor = at::zeros({5,5}, at::kCUDA);
and another function that takes a tensor as an argument:
at::Tensor my_func(at::Tensor t)
{
    // some operation that changes the data in t
}
I want to call my_func on subsets of my_tensor in a loop.
Currently I've tried:
const int stride = 5;
for (int i = 0; i < 5; i++)
{
    my_func(my_tensor.data() + i*stride);
}
But my_tensor.data() + i*stride
gives me a raw pointer, and my_func needs an at::Tensor
. I can't just dereference the pointer either, because it points to the raw data, not to an at::Tensor
object.
My goal is to update my_tensor
in-place so I don’t have to do unnecessary copying.
How can I do this? It seems like something along the lines of a TensorAccessor
might work. When I went to try it, though, the compiler told me packed_accessor()
didn't exist (PyTorch 0.4.1).