Remove for loop filling a tensor

Hi

I have the following snippet:

    for i in range(n):
        foo[i, :] = functor(x - i)
        bar[i, :] = functor(y - i)

Clearly these operations are parallelizable, and I'm looking for something like NumPy's apply_along_axis, or any other operation that would vectorize this.

The functor uses PyTorch operations.

Any ideas?

Solved using broadcasting, where functor is an elementwise op and x, y are 1D vectors:

    # Column vector of shifts 0, 1, ..., n-1 with shape (n, 1)
    range_tensor = torch.arange(n, dtype=torch.float32).reshape(-1, 1)
    # (1, len(x)) - (n, 1) broadcasts to (n, len(x)), so row i is functor(x - i)
    foo = functor(x.reshape(1, -1) - range_tensor)
    bar = functor(y.reshape(1, -1) - range_tensor)
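
For reference, here is a minimal self-contained sketch checking that the broadcast version matches the original loop. torch.sin stands in for functor, and n and the length of x are made-up values:

    import torch

    functor = torch.sin  # placeholder elementwise op
    n = 4
    x = torch.randn(6)

    # Loop version: row i holds functor(x - i)
    foo_loop = torch.empty(n, x.numel())
    for i in range(n):
        foo_loop[i, :] = functor(x - i)

    # Broadcast version: (1, len(x)) - (n, 1) -> (n, len(x))
    range_tensor = torch.arange(n, dtype=torch.float32).reshape(-1, 1)
    foo_broadcast = functor(x.reshape(1, -1) - range_tensor)

    print(torch.allclose(foo_loop, foo_broadcast))  # True

If the functor is not purely elementwise, broadcasting won't apply directly; in recent PyTorch versions, torch.vmap can map a function over the shift dimension instead.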