Adding/padding channels to tensors


I would like to add channels to a tensor, so that from [N,C,H,W] it becomes [N, C+rpad, H, W], potentially filled with zeros for the new channels, all that in a forward() function of a nn.Module.

From what I gathered, padding in the width and height dimensions is implemented in F.pad(), which calls functions from nn._functions.padding, such as those used by ConstantPad2d. Unfortunately, padding channels is missing there.

I also see that a legacy module implements some of those things, but it does not seem to create new Variables, and I am not sure if I should use something like this.

Anyway, I am open to suggestions for clean and concise ways to implement zero-padding of the channels inside a nn.Module.forward() function.
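For what it's worth, F.pad() can actually pad the channel dimension too, since constant-mode padding works on arbitrary dimensions: the pad tuple is ordered from the last dimension backwards, so a 6-element tuple on a 4D tensor reaches the channel axis. A minimal sketch (the module name ChannelPad is my own):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelPad(nn.Module):
    """Zero-pads [N, C, H, W] to [N, C + rpad, H, W] using F.pad."""

    def __init__(self, rpad):
        super().__init__()
        self.rpad = rpad

    def forward(self, x):
        # The pad tuple runs from the last dim backwards:
        # (W_left, W_right, H_top, H_bottom, C_front, C_back)
        return F.pad(x, (0, 0, 0, 0, 0, self.rpad))
```

For example, ChannelPad(3) turns a [2, 5, 4, 4] input into a [2, 8, 4, 4] output whose last 3 channels are zero.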



If your original tensor is of the size you gave above, you can use the following code to pad inp:

padding = Variable(torch.zeros(N, pad, H, W))
padded_inp =, padding), 1)

Quite nice, works perfectly. Thank you! I don’t know why I was afraid of creating a Variable in forward().