Binary classification: batch_y vs. y_pred

I am working on a binary classification problem. My batch_y and y_pred do not have the same shape, yet I get no warnings/errors when I compute the loss with nn.BCELoss()(y_pred, batch_y):

batch_y.shape:  torch.Size([10, 1, 1])
tensor([[[0.]],

        [[1.]],

        [[0.]],

        [[0.]],

        [[0.]],

        [[1.]],

        [[1.]],

        [[1.]],

        [[1.]],

        [[0.]]], device='cuda:0')

y_pred.shape:  torch.Size([10, 1])
tensor([[0.5170],
        [0.5114],
        [0.5103],
        [0.4971],
        [0.4974],
        [0.5024],
        [0.5008],
        [0.4954],
        [0.5035],
        [0.4987]], device='cuda:0', grad_fn=<SigmoidBackward>)

A more extreme version of this occurs when I do classification with three classes instead of two. Here again, I get no warnings/errors when I compute the loss with nn.CrossEntropyLoss()(y_pred, batch_y):

batch_y.shape:  torch.Size([3])
tensor([1, 2, 0], device='cuda:0')

y_pred.shape:  torch.Size([3, 3])
tensor([[-0.0718, -0.1237, -0.1143],
        [-0.0757, -0.1294, -0.1150],
        [-0.0792, -0.1128, -0.1106]],
       device='cuda:0', grad_fn=<ThAddmmBackward>)

Any ideas why this might be the case? Could it negatively impact training?

Which PyTorch version are you using?
This code:

import torch
import torch.nn as nn

criterion = nn.BCELoss()
output = torch.sigmoid(torch.randn(10, 1, 1))  # predictions: [10, 1, 1]
target = torch.randint(0, 2, (10, 1)).float()  # binary labels in {0, 1}: [10, 1]
loss = criterion(output, target)

raises a warning:

UserWarning: Using a target size (torch.Size([10, 1])) that is different to the input size (torch.Size([10, 1, 1])) is deprecated. Please ensure they have the same size.

in a slightly older nightly build (1.2.0.dev20190718).
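If you are on an older release, the mismatched sizes were presumably broadcast against each other silently, which pairs up elements you did not intend and can therefore skew the loss and hurt training. A minimal sketch of the usual fix, reusing your variable names and assuming you keep y_pred as [10, 1], is to squeeze the trailing dimension out of batch_y before calling the criterion:

import torch
import torch.nn as nn

y_pred = torch.sigmoid(torch.randn(10, 1))         # model output: [10, 1]
batch_y = torch.randint(0, 2, (10, 1, 1)).float()  # labels: [10, 1, 1]

# Remove the trailing singleton dimension so both tensors are [10, 1].
batch_y = batch_y.squeeze(-1)

criterion = nn.BCELoss()
loss = criterion(y_pred, batch_y)  # shapes now match, no warning

Alternatively, y_pred.unsqueeze(-1) would work in the other direction; either way, input and target should end up with identical sizes.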

Regarding nn.CrossEntropyLoss, the shapes are correct:
this criterion expects the model output to contain the class logits in the shape [batch_size, nb_classes, *] and the target to contain the class indices in the shape [batch_size, *], where the asterisk denotes optional additional dimensions.
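To make the expected shapes concrete, here is a small sketch mirroring your three-class case with random logits; the point is only that a [3, 3] output and a [3] target are exactly what this criterion wants:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
y_pred = torch.randn(3, 3)         # class logits: [batch_size=3, nb_classes=3]
batch_y = torch.tensor([1, 2, 0])  # class indices: [batch_size=3]
loss = criterion(y_pred, batch_y)  # valid shapes, no warning expected

Note also that nn.CrossEntropyLoss applies log_softmax internally, so you should pass raw logits (as in your output) rather than probabilities.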