How to perform a high-dimensional convolution on the output of a model?

e.g.

out = Net(input)

out shape : batchSize * C * H * W

What if I want to apply some convolution operation (like a Gaussian blur) to the output?

The straightforward way is to do it in a loop over each sample, but that is infeasible from an
efficiency standpoint.

Does anyone have a good idea?
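For concreteness, the naive per-sample loop I would like to avoid looks roughly like this (a sketch with made-up sizes; the box filter just stands in for the Gaussian):

import torch
import torch.nn.functional as F

batchSize, C, H, W = 4, 3, 64, 64
out = torch.randn(batchSize, C, H, W)        # stand-in for Net(input)
weight = torch.full((C, 1, 5, 5), 1.0 / 25)  # stand-in filter, one copy per channel

# filter one sample at a time -- this is the loop I would like to avoid
blurred = torch.cat(
    [F.conv2d(out[i:i + 1], weight, groups=C) for i in range(batchSize)],
    dim=0,
)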

import numpy as np
import torch
import torch.nn.functional as nnf

def makeGaussian(size, fwhm=3, center=None):
    # Square 2D Gaussian kernel of side `size` with the given full width at half maximum.
    x = np.arange(0, size, 1, float)
    y = x[:, np.newaxis]

    if center is None:
        x0 = y0 = size // 2
    else:
        x0 = center[0]
        y0 = center[1]

    return np.exp(-4 * np.log(2) * ((x - x0) ** 2 + (y - y0) ** 2) / fwhm ** 2)

g = makeGaussian(20, fwhm=5)

kernel = torch.FloatTensor(g)  # shape (20, 20)

# conv2d expects its weight as (out_channels, in_channels / groups, kH, kW); the batch
# dimension of `out` is handled automatically, so the weight is NOT repeated per sample.
# With groups=3, each of the 3 channels of an RGB output is filtered independently by
# the same Gaussian.
weight = kernel.view(1, 1, 20, 20).repeat(3, 1, 1, 1)  # shape (3, 1, 20, 20)

after_net = nnf.conv2d(out, weight, groups=3)  # nnf is torch.nn.functional

You could do something like this: just make your filters and reshape them into the appropriate weight shape. I included the 2D Gaussian function from here just for clarity.
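To make the shapes concrete, here is a quick self-contained check with made-up sizes (the random weight just stands in for the Gaussian weight above):

import torch
import torch.nn.functional as nnf

batchSize, C, H, W = 4, 3, 64, 64
out = torch.randn(batchSize, C, H, W)  # stand-in for the network output
weight = torch.randn(C, 1, 20, 20)     # same shape as the Gaussian weight above
after_net = nnf.conv2d(out, weight, groups=C)
print(after_net.shape)                 # torch.Size([4, 3, 45, 45]) -- H and W shrink by kernel size - 1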

This will likely change the spatial shape of your output images, though, so you will have to adjust padding, stride, etc. if that’s important.
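If you do want to keep H and W unchanged, one option (just a sketch, reusing makeGaussian, nnf, and out from above) is to use an odd kernel size and pad by half of it:

k = 21  # odd size so that symmetric padding keeps H and W unchanged
g = makeGaussian(k, fwhm=5)
weight = torch.FloatTensor(g).view(1, 1, k, k).repeat(3, 1, 1, 1)  # depthwise weight, shape (3, 1, k, k)
same_size = nnf.conv2d(out, weight, padding=k // 2, groups=3)      # same H and W as out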

Is there any way to implement this in pure PyTorch, without using NumPy?
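I suppose the kernel construction itself could be written with torch ops only, something like this (untested sketch; makeGaussianTorch is just my name for it):

import math
import torch

def makeGaussianTorch(size, fwhm=3.0, center=None):
    # Same isotropic Gaussian as the NumPy version above, built with torch ops only.
    x = torch.arange(size, dtype=torch.float32)
    y = x.unsqueeze(1)  # column vector so x and y broadcast to a (size, size) grid
    if center is None:
        x0 = y0 = size // 2
    else:
        x0, y0 = center
    return torch.exp(-4 * math.log(2) * ((x - x0) ** 2 + (y - y0) ** 2) / fwhm ** 2)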