Why does max() raise "received an invalid combination of arguments - got (list, dim=int), but expected one of:"?

model.eval()
with torch.no_grad():
    #step = 99999
    #for i in range(0, len(test), step):
    #    test_data = test.tolist()[i: i+step]
        
    for i, (images, targets, ImageIDs) in enumerate(train_loader):

        images = list(image.to(device) for image in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        
        outputs = model(images)
        preds = torch.max(outputs, dim=1)
        
        id_list.append(images)
        pred_list.append(pred.item()) 
    
submissions = pd.DataFrame({'ImageID': id_list, 'predictions': pred_list})
submissions.to_csv('test_data/sample_submission.csv', index=False, header=True)

TypeError Traceback (most recent call last)
Input In [11], in <cell line: 2>()
10 targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
12 outputs = model(images)
---> 13 preds = torch.max(outputs, dim=1)
15 id_list.append(images)
16 pred_list.append(pred.item())

TypeError: max() received an invalid combination of arguments - got (list, dim=int), but expected one of:

  • (Tensor input)
  • (Tensor input, Tensor other, *, Tensor out)
  • (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)
  • (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)

torch.max takes a tensor as input.
Try this:

preds = torch.max(torch.tensor(outputs), dim=1)

Be careful if outputs has more than 2 dimensions (because you call dim=1 in the max function).
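For reference, here is a minimal sketch of why the error occurs: torch.max only accepts the Tensor overloads listed in the traceback, so passing a plain Python list fails before dim is even looked at.

```python
import torch

# torch.max needs a Tensor input; a Python list of tensors triggers
# the "invalid combination of arguments" TypeError from the traceback.
logits = torch.randn(4, 3)                   # e.g. 4 samples, 3 classes
values, indices = torch.max(logits, dim=1)   # per-row max value and argmax

try:
    torch.max([logits[0], logits[1]], dim=1)  # list input: fails
except TypeError as e:
    print("TypeError:", e)
```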

@NagaYu Is this solved?


Be careful using torch.tensor, as that will break your computation graph and no gradients will flow from outputs to your params. You'll want to use torch.stack instead, which has a grad_fn.
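A small sketch of the difference (the tensors here are made up for illustration): torch.stack keeps the result attached to the autograd graph, whereas re-wrapping the data copies it out of the graph.

```python
import torch

params = torch.randn(3, requires_grad=True)
outputs = [params * 2, params + 1]      # list of tensors attached to the graph

stacked = torch.stack(outputs)          # shape (2, 3)
assert stacked.grad_fn is not None      # still differentiable

stacked.sum().backward()
assert params.grad is not None          # gradients reached the parameters
```

Under torch.no_grad() (as in the inference loop above) this distinction doesn't matter, but it will bite you if you reuse the same code for training.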


I didn’t solve it.

RuntimeError Traceback (most recent call last)
Input In [11], in <cell line: 2>()
10 targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
12 outputs = model(images)
---> 13 preds = torch.max(torch.tensor(outputs), dim=1)
15 id_list.append(images)
16 pred_list.append(pred.item())
RuntimeError: Could not infer dtype of dict

Could you print out outputs?


yes… My error is 'RuntimeError: Could not infer dtype of dict'.

Nope, I meant the variable outputs.
Does it contain a dictionary or something else that cannot be cast into a torch tensor?


There was no error there.

Can you share it though? Does your model return a Tensor or a list of Tensors?


I haven’t done that yet.

Please print(outputs) if you want any help.

print(outputs)

[{'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)},
 {'boxes': tensor([], size=(0, 4)), 'scores': tensor([]), 'labels': tensor([], dtype=torch.int64)}]
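That output looks like a detection-style model (one dict of 'boxes'/'scores'/'labels' per image), so torch.max over class logits doesn't apply here. A hedged sketch of how predictions could be pulled out of such dicts instead (the example data below is made up; your model currently returns empty tensors, i.e. no detections):

```python
import torch

# Hypothetical detection-style outputs: one dict per image.
outputs = [
    {"boxes": torch.empty(0, 4), "scores": torch.empty(0),
     "labels": torch.empty(0, dtype=torch.int64)},           # no detections
    {"boxes": torch.tensor([[0., 0., 10., 10.]]),
     "scores": torch.tensor([0.9]), "labels": torch.tensor([3])},
]

pred_list = []
for out in outputs:
    if out["scores"].numel() == 0:
        pred_list.append(None)                       # nothing detected
    else:
        best = torch.argmax(out["scores"])           # highest-confidence box
        pred_list.append(out["labels"][best].item())

print(pred_list)  # → [None, 3]
```

Note that in your print all the tensors are empty, so the first question to answer is why the model detects nothing (weights not loaded, score threshold, untrained model, etc.).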