Can the fold function, instead of summing over each filtered patch to return a reconstructed image, return each filtered patch in its own separate channel?
I wrote some code that does this by abusing the batch dimension, but I still have to use a for loop. I want to use it in my training loop, so I need it to do the same thing in parallel. (Assume the ones are a placeholder for the real filter.) If there is no way to do it, I will just use this.
import torch
import torch.nn.functional as f

z = torch.zeros((16, 4, 16))
for i in range(16):
    z[i, :, i] = torch.ones((4,))  # our "ones" filter
print(f.fold(z, (5, 5), kernel_size=2).shape)
print(f.fold(z, (5, 5), kernel_size=2))
torch.Size([16, 1, 5, 5])
tensor([[[[1., 1., 0., 0., 0.],
[1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]]],
[[[0., 1., 1., 0., 0.],
[0., 1., 1., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]]],
[[[0., 0., 1., 1., 0.],
[0., 0., 1., 1., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]]],
[[[0., 0., 0., 1., 1.],
[0., 0., 0., 1., 1.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]]],
[[[0., 0., 0., 0., 0.],
[1., 1., 0., 0., 0.],
[1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]]],
...etc
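For the placeholder case above, at least the loop that builds z can be replaced with a single advanced-indexing assignment, which fills z[i, :, i] for every i at once. This is just a sketch of the same diagonal fill; whether your real filter admits the same trick is a separate question.

```python
import torch
import torch.nn.functional as f

# Same setup as the loop version: batch 16, 4 values per block, 16 blocks.
z = torch.zeros((16, 4, 16))

# Advanced indexing pairs the two index tensors elementwise, so this
# assigns z[i, :, i] = 1 for i in 0..15 without a Python-level loop.
idx = torch.arange(16)
z[idx, :, idx] = 1.0

out = f.fold(z, (5, 5), kernel_size=2)
print(out.shape)  # torch.Size([16, 1, 5, 5])
```

Each batch element still holds one 2x2 patch of ones at its own position, exactly as in the loop version.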