Does it affect the output if I reshape or permute a tensor after convolution or graph convolution operation?

I have an input graph and its adjacency matrix. After performing graph convolution, my output is a 3D tensor (e.g. [256, 17, 3]). Now, to pass this output to Conv2d, I'm reshaping it to a 4D tensor (e.g. [256, 1, 17, 3]). So does it affect the output if I reshape or permute a tensor after a convolution or graph convolution operation?

In your example you are adding a channel dimension, and the conv layer would thus use the last two dimensions as the spatial size.
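A minimal sketch of this layout (shapes taken from the thread; the conv parameters are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

# Assumed graph-conv output: [batch=256, nodes=17, features=3]
x = torch.randn(256, 17, 3)

# Add a channel dimension at dim1 -> [256, 1, 17, 3]
# Conv2d now treats (17, 3) as the spatial size with 1 input channel.
x4d = x.unsqueeze(1)

conv = nn.Conv2d(in_channels=1, out_channels=8,
                 kernel_size=(3, 3), padding=1)
out = conv(x4d)
print(out.shape)  # torch.Size([256, 8, 17, 3])
```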

Generally, yes. Reshaping or permuting a tensor will affect the following operations, since the layer(s) will interpret the input differently. E.g. in your example you could instead unsqueeze at dim2, which would yield a tensor of shape [batch_size=256, channels=17, height=1, width=3] and would therefore also change the output of the next conv layer.
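To make the contrast concrete, here is a sketch of the alternative layout (conv parameters again illustrative): unsqueezing at dim2 turns the 17 nodes into input channels, so the conv layer needs `in_channels=17` and sees a 1x3 spatial size instead.

```python
import torch
import torch.nn as nn

x = torch.randn(256, 17, 3)  # assumed graph-conv output

# Unsqueeze at dim2 -> [256, 17, 1, 3]
# Now Conv2d sees 17 input channels and a (1, 3) spatial size.
alt = x.unsqueeze(2)

conv = nn.Conv2d(in_channels=17, out_channels=8,
                 kernel_size=(1, 3), padding=(0, 1))
out = conv(alt)
print(out.shape)  # torch.Size([256, 8, 1, 3])
```

Same underlying values, but a different interpretation of the dimensions, and hence a different output.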


@ptrblck thanks for the explanation!