No Alias Tensor Operations

Eigen and xtensor have a concept of "no aliasing" (e.g. Eigen's noalias()), which lets an expression write straight into an existing buffer.

How do I tell torch not to do any memory allocation when doing tensor multiplication?

torch::Tensor m = torch::rand({size, size});
torch::Tensor v = torch::rand({size, 1});
torch::Tensor r = torch::rand({size, 1});

r = m * v; // r is preallocated and we should not allocate memory here. How do I tell torch?

The general pattern in Python is torch.mul(m, v, out=r); there should be a function with a similar signature in C++.

That's about preallocation; I don't think aliasing per se plays a role here the way it does in Eigen, since you're using higher-level wrappers with reference semantics. In particular, in-place ops are explicit (m.mul_(v)), as you'd otherwise run into autograd errors.

Thanks for the hint. Matrix multiplication into a preallocated tensor can be done like this:

  torch::Tensor m = torch::rand({size, size});
  torch::Tensor v = torch::rand({size, 1});
  torch::Tensor r = torch::rand({size, 1});
  torch::mm_out(r, m, v);