Map function in Pytorch

Hi all,

Is there any map function in PyTorch? (something like `map` in Python).

I need to map a 1xDxhxw tensor variable to a 1x(9D)xhxw tensor, to augment embedding of each pixel with its 8 neighbour embeddings. Is there any functionality in Pytorch that lets me do that efficiently?
I tried using map in Python this way:

n, d, h, w = embedding.size()
padder = nn.ReflectionPad2d(padding=1)
embedding = padder(embedding)
embedding = map(lambda i, j, M: M[:, :, i-1:i+2, j-1:j+2], range(1, h), range(1, w), embedding)

But it does not work for w > 2 and h > 2.

We don't have a `map` function in PyTorch. However, you can use some clever tricks to do what you want more efficiently. `torch.unfold` might help.
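For reference, here is a minimal sketch of the neighbour-augmentation described above using `torch.nn.functional.unfold` (the sizes `d`, `h`, `w` are made up for illustration): reflection-pad by 1, extract all 3x3 patches, and reshape back to a spatial layout, turning a (1, D, h, w) tensor into (1, 9D, h, w).

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes; any (1, D, h, w) input works the same way.
n, d, h, w = 1, 3, 5, 5
embedding = torch.randn(n, d, h, w)

# Reflection-pad by 1 so every pixel has all 8 neighbours.
padded = F.pad(embedding, (1, 1, 1, 1), mode="reflect")  # (1, d, h+2, w+2)

# Extract every 3x3 patch: F.unfold returns (n, d*9, h*w).
patches = F.unfold(padded, kernel_size=3)

# Reshape back to spatial layout: each pixel now carries its own
# embedding plus those of its 8 neighbours.
augmented = patches.view(n, d * 9, h, w)
print(augmented.shape)  # torch.Size([1, 27, 5, 5])
```

Note that `F.unfold` orders the output channels as (channel, kernel-row, kernel-col), so the first 9 channels of `augmented` are the 3x3 neighbourhood of channel 0, the next 9 are channel 1, and so on.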

Many thanks for your response.
I will think about what can be done using torch.unfold.

Thank you! torch.unfold really made things easier. :slight_smile:

Hi! What happens if we need to apply the same function to all elements of the unfolded tensor? map would still be very useful…
For example, x is a tensor holding an MNIST sample and f is an arbitrary function mapping a tensor of size (M,) to one of size (1,):

x = torch.nn.functional.unfold(x, kernel_size=2, stride=2)  # would give for MNIST size [1, 4, 196]
out = torch.cat([f(x[..., indx]) for indx in range(x.shape[-1])], dim=1)  # is very slow!

Is there any way to do this efficiently now?
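One way to avoid the Python loop, assuming PyTorch >= 2.0 where `torch.vmap` is available, is to vectorize `f` over the patch dimension. This is a sketch with a stand-in `f` (a sum, purely for illustration); any function mapping (M,) to (1,) that vmap can trace works the same way:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)                   # an MNIST-sized sample
patches = F.unfold(x, kernel_size=2, stride=2)  # (1, 4, 196)

# Hypothetical per-patch function f: (M,) -> (1,)
def f(patch):
    return patch.sum().unsqueeze(0)

# Map f over the last dimension (one call per 2x2 patch) without a loop.
out = torch.vmap(f, in_dims=1, out_dims=1)(patches[0])
print(out.shape)  # torch.Size([1, 196])
```

For simple elementwise or reduction-style functions, a plain reshape plus a batched tensor op is usually even faster than vmap, since it stays entirely in vectorized kernels.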