Will resize affect backpropagation?

Hi,

I’m wondering whether the resize operation will affect backpropagation. If I have a tensor input with size (n1*n2*n3, 1) and run the following code, where netA and netB are two networks:

from torch.autograd import Variable

inputv = Variable(input, requires_grad=True)
output = netA(inputv)
output = output.view(n1, n2, n3)
output = netB(output)
output.backward()

If I delete the line “output = output.view(n1, n2, n3)”, will the gradients change? (netB resizes any input to (:, n2, n3) internally, so even if we don’t resize the input manually before feeding it into the network, it still runs without errors.)

Hi,

Resizing is treated like any other op and is perfectly differentiable, so you can .view() it any way you want and it will work!
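
For illustration, here is a minimal, self-contained sketch (not your exact setup: netA and netB are stand-in Linear layers with made-up shapes, and in current PyTorch a plain tensor with requires_grad=True replaces Variable). The output is reduced with .sum() so backward() gets a scalar. It shows gradients flowing through .view() back to the input:

import torch

# Toy stand-ins for netA and netB, just to demonstrate gradient flow through .view()
n1, n2, n3 = 2, 3, 4
netA = torch.nn.Linear(1, 1)      # maps (n1*n2*n3, 1) -> (n1*n2*n3, 1)
netB = torch.nn.Linear(n3, 1)     # consumes the last dim after the view

inputv = torch.randn(n1 * n2 * n3, 1, requires_grad=True)

output = netA(inputv)             # shape (n1*n2*n3, 1)
output = output.view(n1, n2, n3)  # differentiable reshape; autograd records it
output = netB(output)             # shape (n1, n2, 1)

output.sum().backward()           # reduce to a scalar for this toy example

print(inputv.grad.shape)          # torch.Size([24, 1]) -- gradients reached the input

The .view() step only rearranges how the same elements are indexed, so its backward pass simply reshapes the incoming gradient; with or without the manual view, the gradient values reaching inputv are the same.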