RuntimeError: Could not infer dtype of dict

samples = next(iter(test_dataloader))
image, lbls1, lbls2 = samples
actual_age_data = lbls1[:10].numpy()
outputs_1 = net(image[:8].float())
pred = torch.max(torch.tensor(outputs_1), dim=1).data.numpy()
print(f'Prediction age data: {pred}')
print(f'Actual age data: {actual_age_data}')
#print("outputs_1: ",type(outputs_1))
I expected this code to print the actual vs. predicted data for the age prediction, but instead it raises 'RuntimeError: Could not infer dtype of dict'. I'm not sure how to correct my code. Thank you in advance.

Based on the error message it seems you are trying to create a tensor from a dict as seen here:

d = {"a": torch.randn(1)}
x = torch.tensor(d)
# RuntimeError: Could not infer dtype of dict

I don't know where this cast happens, but I would guess it could be in torch.tensor(outputs_1).
Check if outputs_1 is indeed a dict and, if so, index the internal tensors via their keys instead of trying to create a tensor from the whole dict.
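For example, with a toy dict of the kind a multi-head model might return (the key names and shapes here are just placeholders), indexing the dict first makes the tensor ops work:

```python
import torch

# A dict of tensors, similar to what a multi-head model might return
out = {"label1": torch.randn(8, 117), "label2": torch.randn(8, 2)}

# torch.tensor(out) would raise "RuntimeError: Could not infer dtype of dict".
# Index the dict by key first, then operate on the plain tensor:
age_logits = out["label1"]                       # shape [8, 117]
pred_age = torch.max(age_logits, dim=1).indices  # predicted class per sample
print(pred_age.shape)                            # torch.Size([8])
```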

Yes, the error comes from torch.tensor(outputs_1). I have checked type(outputs_1) and it is indeed a dict, and printing outputs_1 shows:

outputs_1: {'label1': tensor([[-2.1833e+01, -7.4740e+00, -6.4569e+00, -7.2793e+00, -6.1732e+00,
-2.6139e+00, -4.6173e+00, -4.4403e+00, -3.1389e+00, -2.0839e+00,
-3.1995e+00, -2.4624e+00, -2.4931e+00, -1.8438e+00, -2.4990e+00,
-3.0997e+00, -5.7571e-02, -1.6917e+00, -1.6546e+00, -2.8760e+00,
-1.9103e+00, -1.0984e-01, -1.3449e+00, -5.3740e-01, 6.6199e-01,
1.9206e+00, 2.0636e+00, 1.6582e+00, 2.7317e+00, 1.5725e-01,
1.8700e+00, 3.7317e-01, 2.1982e+00, -1.1778e+00, 8.7232e-01,
2.4418e+00, 6.9796e-01, -3.9097e-01, 8.6629e-01, 4.8773e-01,
-2.0673e-01, -7.4729e-01, -2.5388e+00, -2.3810e+00, -2.4231e+00,
-7.9829e-01, -2.4935e+00, -3.4962e+00, -1.5243e+00, -3.9984e+00,
-2.6961e+00, -1.2895e+00, -4.5823e+00, -2.4308e+00, -2.3773e+00,
-5.0415e+00, -5.7367e+00, -5.2446e+00, -6.0081e+00, -5.3693e+00,
-4.0731e+00, -5.2827e+00, -4.3017e+00, -3.5675e+00, -3.6378e+00,
-2.3023e+00, -2.7249e+00, -4.2130e+00, -4.5237e+00, -6.5374e+00,
-5.1393e+00, -5.8990e+00, -7.0035e+00, -7.7819e+00, -6.8964e+00,
-6.0729e+00, -4.9166e+00, -2.0606e+01, -5.5203e+00, -9.4577e+00,
-5.5765e+00, -9.3993e+00, -9.3344e+00, -9.5835e+00, -1.0025e+01,
-6.1867e+00, -1.0492e+01, -1.3903e+01, -1.1292e+01, -7.4626e+00,
-6.4464e+00, -1.9786e+01, -1.0472e+01, -6.0569e+00, -1.9208e+01,
-1.2765e+01, -7.2723e+00, -1.9674e+01, -1.9448e+01, -7.7463e+00,
-1.0923e+01, -1.4090e+01, -2.0662e+01, -9.1098e+00, -2.1290e+01,
-8.4933e+00, -1.9817e+01, -2.1026e+01, -1.9841e+01, -2.1070e+01,
-6.8339e+00, -1.2337e+01, -1.9485e+01, -2.0088e+01, -2.0572e+01,
-1.1442e+01, -1.3681e+01],
[-1.4552e+01, -5.6756e+00, -6.5101e+00, -6.8919e+00, -6.0353e+00,
-4.6946e+00, -7.1487e+00, -8.0668e+00, -5.0402e+00, -6.3317e+00,
-5.1300e+00, -7.0153e+00, -3.8860e+00, -6.6914e+00, -4.9960e+00,
-3.5453e+00, -2.9731e+00, -2.3850e+00, -1.9343e+00, -2.9652e+00,
-4.8756e+00, -9.2717e-01, -2.6973e+00, -3.4213e-01, -3.4659e-01,
-6.3965e-01, 1.0892e+00, 3.6887e-01, 3.8471e-01, 7.3655e-01,
1.1782e+00, 1.8273e+00, 1.1168e+00, 2.2465e+00, 7.9465e-01,
2.7679e+00, 1.1260e+00, 5.2878e-01, 1.9084e+00, 1.1220e+00,
1.7439e+00, -8.7711e-01, 8.4949e-01, 1.1180e+00, -7.6084e-01,
-5.9320e-01, -1.4923e-01, 3.3403e-01, -7.2173e-01, -1.5681e+00,
1.4223e-01, -1.4495e+00, -1.9116e+00, -2.3361e+00, -8.7268e-01,
-1.7656e+00, -1.3184e+00, -1.9016e-01, -1.7168e+00, -2.8503e+00,
-1.3222e+00, -1.6766e+00, -1.6877e+00, -3.0505e+00, -2.2229e+00,
-2.1420e+00, -2.3489e+00, -2.3364e+00, -3.3212e+00, -2.6197e+00,
-2.0843e+00, -5.5276e+00, -3.6150e+00, -4.8522e+00, -4.7064e+00,
-2.5211e+00, -2.6958e+00, -1.4266e+01, -7.6061e+00, -5.5877e+00,
-5.0405e+00, -5.1304e+00, -5.7737e+00, -5.1433e+00, -6.4522e+00,
-6.1288e+00, -9.2950e+00, -8.6693e+00, -1.0034e+01, -4.3842e+00,
-7.4522e+00, -1.4067e+01, -5.8612e+00, -3.6958e+00, -1.3594e+01,
-9.0726e+00, -6.8618e+00, -1.3105e+01, -1.4566e+01, -3.9558e+00,
-7.7858e+00, -7.6376e+00, -1.4513e+01, -9.9152e+00, -1.4694e+01,
-3.4533e+00, -1.3691e+01, -1.3588e+01, -1.4116e+01, -1.3777e+01,
-2.9833e+00, -5.9016e+00, -1.4990e+01, -1.3143e+01, -1.4430e+01,
-1.0271e+01, -1.0298e+01],
[-1.6682e+01, 8.5411e-01, 3.2389e+00, 2.7710e+00, 3.2007e+00,
3.7062e+00, 1.9179e+00, 2.1604e+00, 1.1266e+00, 1.8706e+00,
6.0794e-01, 4.0901e-01, 1.2694e+00, 4.0459e-01, 4.4555e-01,
-1.8943e+00, -4.0417e-01, -2.4116e+00, -2.7357e+00, -5.2745e+00,
-7.8073e+00, -2.9339e+00, -5.7495e+00, -5.0040e+00, -6.2846e+00,
-3.9105e+00, -3.7977e+00, -3.2278e+00, -5.1827e+00, -3.8533e+00,
-2.8924e+00, -1.7879e+00, -3.0827e+00, -4.5894e+00, -2.6311e+00,
-1.7025e+00, -1.8633e+00, -2.9210e+00, -2.0001e+00, -1.8138e+00,
-1.3665e+00, -4.2716e+00, -5.8459e+00, -3.0440e+00, -5.8071e+00,
-6.2883e+00, -5.2206e+00, -9.5440e+00, -3.6475e+00, -6.1591e+00,
-1.0592e+01, -4.3617e+00, -3.3642e+00, -5.3363e+00, -5.5787e+00,
-7.4249e+00, -7.1692e+00, -3.3315e+00, -3.9705e+00, -5.8977e+00,
-4.3143e+00, -6.7721e+00, -7.5227e+00, -2.8595e+00, -1.7640e+00,
-4.2186e+00, -3.0131e+00, -2.9117e+00, -2.5477e+00, -4.0523e+00,
-1.8275e+00, -4.5480e+00, -3.9839e+00, -2.8844e+00, -1.2190e+00,
-5.2249e+00, -7.7251e+00, -1.7195e+01, -4.9494e+00, -1.0878e+01,
-3.7370e+00, -7.2804e+00, -4.1636e+00, -3.5399e+00, -6.0598e+00,
-5.9881e+00, -5.0667e+00, -6.0516e+00, -5.8384e+00, -6.3401e+00,
-7.5850e+00, -1.6027e+01, -9.3376e+00, -1.5146e+01, -1.6669e+01,
-4.7354e+00, -7.9051e+00, -1.6468e+01, -1.5690e+01, -1.1599e+01,
-7.3489e+00, -6.6800e+00, -1.6429e+01, -1.0829e+01, -1.6690e+01,
-7.3186e+00, -1.6495e+01, -1.6617e+01, -1.6624e+01, -1.7161e+01,
-9.4183e+00, -1.0559e+01, -1.5795e+01, -1.7331e+01, -1.6139e+01,
-9.7606e+00, -7.7116e+00]], grad_fn=), 'label2': tensor([[ 2.1139, -2.6771],
[-3.7862, 3.9692],
[-1.0382, 0.0937],
[-1.3758, 0.0215],
[-2.1546, 1.3103],
[-3.4222, 3.5594],
[ 1.7055, -1.5535],
[-0.7628, -0.2231]], grad_fn=)}
like this. I'm still not sure how to correct my code. Thank you in advance.

Thanks for confirming. It seems your model returns a dict with two keys: label1 and label2.
I don't know what these tensors represent, but I would assume one of them should hold the actual logits, which can be used to calculate the loss as well as to derive the output classes.
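For instance, if label1 did hold the age logits, both the loss and the predicted classes would come from that one tensor. A minimal sketch (the key name, the 117 classes, and the batch size of 8 are assumptions read off the printout above):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 117)             # stand-in for outputs_1["label1"]
targets = torch.randint(0, 117, (8,))    # stand-in for the age targets

loss = F.cross_entropy(logits, targets)  # scalar loss for training
preds = logits.argmax(dim=1)             # predicted classes for evaluation
print(loss.item(), preds.shape)
```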

Actually, in my model label1 represents the age labels and label2 represents the gender labels.

If both are the labels, i.e. the targets, then I don’t quite understand why the model returns them in:

outputs_1 = net(image[:8].float())

as I would assume it should yield predictions/logits.
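For context: a dict output like this usually comes from a multi-head model whose forward returns both heads at once. A toy sketch of such an architecture (the layer sizes and key names are assumptions based on the shapes printed above, not the actual model):

```python
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Toy two-head model: one head for age logits, one for gender logits."""
    def __init__(self, in_features=64, num_ages=117, num_genders=2):
        super().__init__()
        self.backbone = nn.Linear(in_features, 32)
        self.age_head = nn.Linear(32, num_ages)
        self.gender_head = nn.Linear(32, num_genders)

    def forward(self, x):
        feats = torch.relu(self.backbone(x))
        # Returning a dict here is why torch.tensor(outputs_1) fails downstream
        return {"label1": self.age_head(feats),
                "label2": self.gender_head(feats)}

net = MultiHeadNet()
out = net(torch.randn(8, 64))
print(type(out), out["label1"].shape, out["label2"].shape)
# <class 'dict'> torch.Size([8, 117]) torch.Size([8, 2])
```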

If I remove torch.tensor from this line:

pred = torch.max(outputs_1, dim=1).data.numpy()

then it throws:

TypeError: max() received an invalid combination of arguments - got (dict, dim=int), but expected one of:

  • (Tensor input)
  • (Tensor input, Tensor other, *, Tensor out)
  • (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)
  • (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)

How do I correct this error? Thank you in advance.

Removing the torch.tensor call won't fix the issue, since you are still passing the whole dict to torch.max.
I would recommend figuring out why your model returns a dict and why it apparently contains targets instead of logits.
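Until that is resolved: assuming the dict values under 'label1' and 'label2' are actually the age and gender logits (key names taken from the printed output), the predictions could be extracted per head like this, detaching before the NumPy conversion since the tensors carry a grad_fn:

```python
import torch

# Placeholder for the model output shown above: a dict of two logit tensors
outputs_1 = {"label1": torch.randn(8, 117, requires_grad=True),
             "label2": torch.randn(8, 2, requires_grad=True)}

# Index each head, detach from autograd, then convert to NumPy
pred_age = outputs_1["label1"].detach().argmax(dim=1).numpy()
pred_gender = outputs_1["label2"].detach().argmax(dim=1).numpy()
print(pred_age.shape, pred_gender.shape)  # (8,) (8,)
```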

okay. I got it. Thank you. :+1:t2: