After reading the documentation for squeeze() I am still confused about this in actual practice

I am a self-taught newbie and I apologize beforehand for asking this stupid question. I have a CNN that outputs shape [Batch, 1, 1, 1] after applying F.sigmoid() to the last layer. I then use squeeze() to turn the shape into [Batch] so I can feed it into my BCE_loss(). I don't really understand what just happened; the documentation just says it returns a tensor with all dimensions of size 1 removed.
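For context, here is a minimal sketch of what I'm doing (the tensors are made up with random values, just to show the shapes):

```python
import torch
import torch.nn.functional as F

batch = 4
# Simulated model output after sigmoid: shape [Batch, 1, 1, 1], values in [0, 1]
out = torch.rand(batch, 1, 1, 1)
target = torch.randint(0, 2, (batch,)).float()  # binary labels, shape [Batch]

out = out.squeeze()  # all size-1 dims removed -> shape [Batch]
loss = F.binary_cross_entropy(out, target)
print(out.shape)  # torch.Size([4])
```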

If I look at the documentation for BCE_loss() I see

weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size "nbatch".

which makes sense of why we use squeeze(), but does the tensor get "damaged" or altered? Are the C, H, W dimensions not needed to calculate the loss? It's hard to visualize the math in my head; can someone please help me?

[B x 1 x 1 x 1] and [B] have the same number of elements. Nothing is lost; your tensor is just reshaped.
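You can verify this yourself: squeezing removes only the size-1 axes, so the values and their order are untouched.

```python
import torch

x = torch.arange(3.0).reshape(3, 1, 1, 1)  # shape [3, 1, 1, 1]
y = x.squeeze()                            # shape [3]

print(x.numel(), y.numel())       # same element count
print(torch.equal(x.view(-1), y)) # True: same values, same order
# Note: if the batch size were 1, a bare squeeze() would also drop the
# batch dim; calling x.squeeze(1).squeeze(1).squeeze(1) (or squeeze with
# explicit dims) avoids that edge case.
```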

Thanks! I guess I just needed some reassurance.