Does torch.squeeze break the computation graph?

Hi, for calculating my loss function, I’m using torch.squeeze to change the dimension of the output from [N,1,H,W] to [N,H,W]. I was wondering if this operation breaks the computation graph and causes autograd to fail?
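
For context, here is a minimal sketch of the kind of setup described above, assuming an MSE-style loss against a [N, H, W] target (the loss and shapes here are illustrative assumptions, not from the original post):

import torch
import torch.nn.functional as F

# Hypothetical model output of shape [N, 1, H, W] and a target of shape [N, H, W]
output = torch.randn(4, 1, 8, 8, requires_grad=True)
target = torch.randn(4, 8, 8)

# Squeeze the channel dimension so the shapes match the target
loss = F.mse_loss(output.squeeze(1), target)
loss.backward()  # gradients still flow back to `output`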

No, squeeze won’t detach the tensor, and autograd will properly calculate the gradients for the original tensor, as seen here:

import torch

x = torch.randn(2, 1, 2, 2, requires_grad=True)
y = x.squeeze(1)       # [2, 1, 2, 2] -> [2, 2, 2]; still part of the graph
y.mean().backward()
print(x.grad)          # gradients flow back to the original tensor
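
This prints a tensor with the same shape as x, i.e. [2, 1, 2, 2], filled with 0.125, since each of the 8 elements contributes 1/8 to the mean.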

Thank you @ptrblck!