I am playing around with creating a custom layer for training. At some point I ended up with a tensor of shape (N x M x H). I would like to apply a math operation to the last dimension of this tensor (for the sake of simplicity, let's say a sum of elements) to end up with a tensor of shape (N x M). I found that Tensor.apply_ would do something like that, but apparently it does not work on CUDA tensors.
Is there a way to do something like this in an elegant and efficient way?
Hi, how are you using apply for this? If available, I would just pass the dim arg to whatever math operation you're calling on your tensor, e.g. a.sum(dim=2).
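For the sum case from the question, this reduces the last dimension directly; the shapes below are made up for illustration:

```python
import torch

# Hypothetical (N x M x H) input, here N=4, M=3, H=5.
a = torch.randn(4, 3, 5)

# Reducing over the last dimension yields an (N x M) tensor.
out = a.sum(dim=2)       # equivalently a.sum(dim=-1)
print(out.shape)         # torch.Size([4, 3])
```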
That's obviously an option; however, the method I want to call on this dimension is more complex and written by me. It takes a one-dimensional tensor of values and returns a single value. Something like this:
```python
def sth(values: torch.Tensor) -> float:
    """Exemplary function, I am aware of a pytorch built-in sum ;-)"""
    return sum(values)

torch.stack(
    tuple(
        torch.stack(tuple(map(sth, elem.unbind(0))))
        for elem in a
    )
)
```
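A flatter equivalent of the nested stacking above is to merge the leading dimensions, apply the function once per row, and reshape back. This is a sketch assuming the same toy sth from the question; it still loops in Python, so it only tidies the code, it does not vectorize it:

```python
import torch

def sth(values: torch.Tensor) -> torch.Tensor:
    """Toy per-row reduction standing in for the custom function."""
    return values.sum()

a = torch.randn(4, 3, 5)  # hypothetical (N x M x H) input

# Merge N and M, apply the function per row of length H, restore (N x M).
flat = a.reshape(-1, a.shape[-1])                 # (N*M, H)
out = torch.stack([sth(row) for row in flat])     # (N*M,)
out = out.reshape(a.shape[:-1])                   # (N, M)
```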
Excuse my assumption
It’s hard to say without knowing more about your custom function; you might try using torch.einsum if it’s a matrix operation?
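If the operation can be written as a product-and-sum over named indices, torch.einsum expresses it in one call. As a sketch, the running sum example becomes a contraction over the last index (shapes assumed for illustration):

```python
import torch

a = torch.randn(4, 3, 5)  # hypothetical (N x M x H) input

# Contract away the h index: (n, m, h) -> (n, m).
out = torch.einsum('nmh->nm', a)
```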
EDIT: @mkorycinski You also might be able to vectorize your custom function; that will almost certainly be more performant than nested Python loops. Feel free to share the function if you'd like help with the vectorized implementation.
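Since the real function wasn't shared, here is what vectorizing would look like for the toy sth above: rewrite it to reduce the last dimension of the whole batch at once, so no Python-level loop runs per row. The function name and shapes are assumptions for illustration:

```python
import torch

def sth_vectorized(values: torch.Tensor) -> torch.Tensor:
    """Vectorized stand-in: reduces the last dim of any (..., H) tensor at once."""
    return values.sum(dim=-1)

a = torch.randn(4, 3, 5)   # hypothetical (N x M x H) input
out = sth_vectorized(a)    # (N x M), computed in one fused kernel call
```

If the custom function can't be rewritten this way, newer PyTorch releases also offer torch.vmap, which lifts a per-row function over batch dimensions automatically.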