Please ensure they have the same size

ValueError                                Traceback (most recent call last)
<ipython-input-80-fbef08da00d7> in <module>
     31         #loss = output['loss']
     32         #loss = net(img, target['bbox'], target['labels']).to(device)
---> 33         loss = criterion(output, clas).to(device)
     34         loss_bb = criterion_bb(outputs, box).to(device)
     35         loss.backward()

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    528 
    529     def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 530         return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
    531 
    532 

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
   2517     if target.size() != input.size():
   2518         raise ValueError("Using a target size ({}) that is different to the input size ({}) is deprecated. "
-> 2519                          "Please ensure they have the same size.".format(target.size(), input.size()))
   2520 
   2521     if weight is not None:

ValueError: Using a target size (torch.Size([1, 23])) that is different to the input size (torch.Size([1, 19])) is deprecated. Please ensure they have the same size.

for i, (img, boxes, classes) in enumerate(train_loader):
    net.to(device)
    img = img.to(device)
    box = boxes.to(device)
    clas = classes.to(device)
    optimizer.zero_grad()
    output = net(img)
    loss = criterion(output, clas)       # class loss -- this line raises the ValueError
    loss_bb = criterion_bb(output, box)  # bbox loss ("outputs" was an undefined name)
    (loss + loss_bb).backward()          # backprop both losses, not only the class loss
    optimizer.step()

I get an error in the loss when I pass the label and bbox tensors into it. The model is vgg16_bn and I have 19 classes. I assume that the model or the loss is trying to push all the bboxes into the classifier layer, and each image has a different number of objects. How do I feed the bboxes and labels to the model? Or is this model simply not suitable for multi-class + multi-label classification?
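
To make the mismatch concrete, it can be reproduced in isolation. A minimal sketch, assuming criterion is nn.BCELoss (which matches the F.binary_cross_entropy call in the traceback) and that four extra values ended up appended to the target:

import torch
import torch.nn as nn

criterion = nn.BCELoss()
output = torch.rand(1, 19)  # model output: one probability per class
target = torch.rand(1, 23)  # target with extra values appended (e.g. bbox coords)
loss = criterion(output, target)
# ValueError: Using a target size (torch.Size([1, 23])) that is different to
# the input size (torch.Size([1, 19])) is deprecated. ...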

Based on the error message it seems that the model outputs a tensor of shape [1, 19], which would correspond to the 19 classes you are dealing with, while the target has the shape [1, 23].
I don’t know how you’ve manipulated the model to output bounding boxes, but the current output seems to contain only the class predictions, so you would most likely have to fix the target shape (are you appending the bbox coordinates to the clas tensor?).
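
If you want to keep vgg16_bn and predict both the class scores and a single box per image, one option is to give the model two separate output heads, so that each loss sees a matching target shape. This is only a sketch under that assumption (the VGGTwoHeads module below is made up for illustration, not your actual model):

import torch
import torch.nn as nn
import torchvision

class VGGTwoHeads(nn.Module):
    # Hypothetical two-head model: separate outputs for the 19 class scores
    # and 4 box coordinates, so criterion sees [N, 19] and criterion_bb sees [N, 4].
    def __init__(self, num_classes=19):
        super().__init__()
        self.backbone = torchvision.models.vgg16_bn(pretrained=True)
        num_features = self.backbone.classifier[6].in_features  # 4096
        self.backbone.classifier[6] = nn.Identity()             # drop the 1000-way ImageNet layer
        self.class_head = nn.Linear(num_features, num_classes)
        self.bbox_head = nn.Linear(num_features, 4)             # one box per image

    def forward(self, x):
        feats = self.backbone(x)
        # sigmoid so the class output is a valid input for nn.BCELoss
        return torch.sigmoid(self.class_head(feats)), self.bbox_head(feats)

net = VGGTwoHeads()
cls_out, box_out = net(torch.rand(1, 3, 224, 224))
print(cls_out.shape, box_out.shape)  # torch.Size([1, 19]) torch.Size([1, 4])

Note that this still predicts a fixed, single box per image. Since your images contain a varying number of objects, a detection architecture is the better fit.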

Thanks. I am a beginner and tried to use a classification model to detect objects. I am training faster_rcnn now; I don't yet have enough knowledge to adapt models to other tasks.
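
In case it helps for that next step: torchvision's detection models already handle a varying number of objects per image, since the targets are passed as a list of per-image dicts. A minimal training-step sketch (the image sizes, boxes, and labels below are made-up examples):

import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    pretrained=False, num_classes=20)  # 19 classes + background
model.train()

images = [torch.rand(3, 300, 400), torch.rand(3, 400, 400)]
targets = [
    {"boxes": torch.tensor([[10., 20., 100., 200.]]),  # one object
     "labels": torch.tensor([3])},
    {"boxes": torch.tensor([[30., 40., 120., 220.],
                            [50., 60., 150., 260.]]),  # two objects
     "labels": torch.tensor([1, 7])},
]

loss_dict = model(images, targets)  # dict of partial losses in train mode
loss = sum(loss_dict.values())
loss.backward()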