Is there a map function in PyTorch (something like Python's built-in map)?
I need to map a 1xDxhxw tensor to a 1x(9D)xhxw tensor, augmenting the embedding of each pixel with the embeddings of its 8 neighbours. Is there any functionality in PyTorch that lets me do that efficiently?
I tried using map in Python this way:
import torch
import torch.nn as nn
from itertools import product

n, d, h, w = embedding.size()
padder = nn.ReflectionPad2d(padding=1)
embedding = padder(embedding)  # now 1 x D x (h+2) x (w+2)
# One 3x3 patch per pixel of the original (unpadded) image.
# Note: map over a single iterable of (i, j) pairs; passing several
# iterables to map zips them in parallel, which is not what we want here.
patches = map(lambda ij: embedding[:, :, ij[0]-1:ij[0]+2, ij[1]-1:ij[1]+2],
              product(range(1, h + 1), range(1, w + 1)))
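A loop-free alternative is torch.nn.functional.unfold, which gathers every 3x3 neighbourhood in a single call. A minimal sketch, with assumed toy values for D, h, and w (the resulting channel order is patch-position-major within each input channel):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the dimensions in the question (assumed values).
embedding = torch.randn(1, 4, 5, 6)  # 1 x D x h x w, here D=4, h=5, w=6
n, d, h, w = embedding.size()

# Reflection-pad by 1, then extract all 3x3 patches at once.
padded = F.pad(embedding, (1, 1, 1, 1), mode='reflect')
out = F.unfold(padded, kernel_size=3)  # 1 x (9D) x (h*w)
out = out.view(n, 9 * d, h, w)        # 1 x (9D) x h x w
```

For each input channel c, the nine output channels c*9 .. c*9+8 hold the 3x3 neighbourhood in row-major order, so channel c*9+4 is the original pixel itself.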
Hi! What happens if we need to apply the same function to all elements of the unfolded tensor? map would still be very useful here…
For example, when x is a tensor holding an MNIST sample and f is an arbitrary function mapping a tensor of size (M,) --> (1,):
x = torch.nn.functional.unfold(x, kernel_size=2, stride=2)  # for MNIST gives size [1, 4, 196]
x = torch.cat([f(x[..., indx]) for indx in range(x.shape[-1])], dim=1)  # is very slow!
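One way to drop the Python loop, assuming your PyTorch version ships torch.vmap (2.0+) and f is built from vmap-compatible ops, is to map f over the patch dimension. A sketch with a toy stand-in for f:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)                   # an MNIST-sized sample
patches = F.unfold(x, kernel_size=2, stride=2)  # 1 x 4 x 196
patches = patches[0]                            # 4 x 196

def f(p):
    # Toy stand-in for the per-patch function (M,) -> (1,).
    return p.sum().unsqueeze(0)

# vmap applies f to each column (patch) without a Python-level loop.
out = torch.vmap(f, in_dims=1, out_dims=1)(patches)  # 1 x 196
```

If f is already written to broadcast over a batch dimension, simply calling it once on the whole (4, 196) tensor is faster still; vmap is the fallback when f only understands a single (M,) vector.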