How to use torch.nn.Unfold to get the right output tensor size

I am trying to adapt this code to the Camvid dataset.

class PicanetG(nn.Module):
    def __init__(self, size, in_channel):
        super(PicanetG, self).__init__()
        self.renet = Renet(size, in_channel, 100)
        self.in_channel = in_channel

    def forward(self, *input):
        x = input[0]
        size = x.size()
        kernel = self.renet(x)
        kernel = F.softmax(kernel, 1)
        print(x.size(), kernel.size())
        kernel = kernel.reshape(size[0], 100, -1)  # renet output [20, 100, 4, 4] -> [20, 100, 16]
        x = F.unfold(x, kernel_size=(5, 5), padding=[1, 1], dilation=[2, 2], stride=5)  # original code: F.unfold(x, [10, 10], dilation=[3, 3])
        print(x.size(), kernel.size())
        x = x.reshape(size[0], size[1], 10 * 10)
        x = torch.matmul(x, kernel)
        x = x.reshape(size[0], size[1], size[2], size[3])
        return x

Running this class produces the following output and error:

torch.Size([20, 1024, 18, 24]) torch.Size([20, 100, 4, 4])
torch.Size([20, 25600, 6]) torch.Size([20, 100, 16])
  0%|                                                                                                                                         | 0/19 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 81, in <module>
    pred, loss = model(img, mask)
  File "/home/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/VAE/model.py", line 46, in forward
    dec, _pred = self.decoder[i](en_out[5 - i], dec)
  File "/home/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/VAE/model.py", line 146, in forward
    fmap_att = self.picanet(fmap)  # F_att
  File "/home/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/VAE/model.py", line 176, in forward
    x = x.reshape(size[0], size[1], 10 * 10)
RuntimeError: shape '[20, 1024, 100]' is invalid for input of size 3072000

I am having trouble choosing the torch.nn.Unfold parameters so that the output tensor has exactly the right size. I would appreciate any help.

Which parameters are you willing to change in unfold, and can you also specify the output shape you require?
Here is a video that explains the unfold operation very nicely: [PyTorch - Convolution under the hood (Unfolding/Folding) - YouTube]
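As a starting point, the shape that unfold produces follows directly from the formula in the PyTorch docs: for an input of shape (N, C, H, W) it returns (N, C × ∏kernel_size, L), where L is the number of sliding blocks. Here is a small pure-Python helper (a sketch of that documented formula, no torch needed; the sizes in the usage lines are taken from the printouts in your question) that you can use to try parameter combinations quickly:

```python
def _pair(v):
    # unfold accepts either an int or a pair for each parameter
    return (v, v) if isinstance(v, int) else tuple(v)

def unfold_output_shape(n, c, spatial, kernel_size, padding=0, dilation=1, stride=1):
    """Shape returned by F.unfold on an (n, c, *spatial) input, per the
    PyTorch docs: (n, c * prod(kernel_size), L), where
    L = prod_d floor((spatial[d] + 2*pad[d] - dil[d]*(k[d]-1) - 1) / stride[d] + 1).
    """
    kernel_size, padding, dilation, stride = (
        _pair(v) for v in (kernel_size, padding, dilation, stride)
    )
    blocks = 1
    for d in range(2):
        blocks *= (
            spatial[d] + 2 * padding[d] - dilation[d] * (kernel_size[d] - 1) - 1
        ) // stride[d] + 1
    return (n, c * kernel_size[0] * kernel_size[1], blocks)

# Your settings on an 18x24 feature map:
print(unfold_output_shape(20, 1024, (18, 24), (5, 5), padding=1, dilation=2, stride=5))

# The original PiCANet settings: a 10x10 kernel with dilation 3 spans a whole
# 28x28 map, so L == 1 and the later reshape to (N, C, 100) is valid
print(unfold_output_shape(20, 1024, (28, 28), (10, 10), dilation=3))  # (20, 102400, 1)
```

The key point is the reshape constraint: `x.reshape(size[0], size[1], 10 * 10)` needs 1024 × 100 = 102400 elements per sample, but your unfold produces 1024 × 5 × 5 = 25600 channels times L blocks (25600 × 6 = 153600 in your traceback), hence the RuntimeError. In the original code the kernel/dilation pair was chosen so the dilated kernel exactly covers the feature map (L = 1); to adapt it to your 18×24 Camvid feature maps you would need kernel/dilation values that span 18×24 the same way, and a matching renet output size.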