I can't get the output of my object detection test. There are no labels or bounding boxes, just images.

pred_list = []
model.eval()
with torch.no_grad():

    for i in test_data:
        image = Image.open(i)
        image = transform(image)
        image = image.unsqueeze(0)  # add batch dimension
        image = image.to(device)
        output = model(image)
        preds = torch.max(output, 600, 600)  # this line raises the TypeError below
        #id_list.append(_id)
        #pred_list.append(preds[0])

TypeError                                 Traceback (most recent call last)
Input In [12], in <cell line: 3>()
      9 image = image.to(device)
     10 output = model(image)
---> 11 preds = torch.max(output, 600, 600)

TypeError: max() received an invalid combination of arguments - got (list, int, int), but expected one of:

  * (Tensor input)
  * (Tensor input, Tensor other, *, Tensor out)
  * (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)
  * (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)

torch.max expects input, dim, and keepdim arguments, as explained in the docs, while you are passing two integers to this call, which look like the spatial size of the tensor. Also note that the error message says the first argument is a list, not a tensor, so you would need to index into the model's output first.
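For illustration, here is a minimal sketch of the valid call pattern, assuming a hypothetical classifier-style output tensor of shape (batch, num_classes) rather than your detection model's actual output:

```python
import torch

# Hypothetical logits: batch of 2 samples, 3 classes
output = torch.tensor([[0.1, 2.0, 0.3],
                       [1.5, 0.2, 0.4]])

# torch.max with a dim argument returns a (values, indices) tuple;
# the indices are the predicted class per sample
values, preds = torch.max(output, dim=1)
print(preds.tolist())  # -> [1, 0]
```

If your model instead returns a list (as the error suggests), you would call `torch.max` on one of its tensor elements, not on the list itself.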
