Cross_entropy_loss(): argument 'input' (position 1) must be Tensor, not collections.OrderedDict

Hi friends,
I was trying to run a simple piece of code and ran into this error while calculating the loss. Can anyone tell me what the problem in the code is? Thanks!

import copy
import time

import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.datasets import Cityscapes

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

transform = transforms.Compose([
    transforms.Resize([256,256]),
    transforms.ToTensor()
])

trainset = Cityscapes(root='/cityscape', split='train', mode='fine', target_type='semantic', transform=transform, target_transform=transform)
valset = Cityscapes(root='/cityscape', split='val', mode='fine', target_type='semantic', transform=transform, target_transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=2, shuffle=True, num_workers=0)
valloader = torch.utils.data.DataLoader(valset, batch_size=2, shuffle=True, num_workers=0)

from torchvision import models
model = models.segmentation.deeplabv3_resnet101(pretrained=False, progress=True, num_classes=19)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_model(model, dataloaders, criterion, optimizer, num_epochs):

    since = time.time()
    val_acc_history = []

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode
            running_loss = 0.0
            running_corrects = 0
            for inputs, labels in dataloaders[phase]:

                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):

                    outputs = model(inputs)
                    loss = criterion(outputs, labels)
                    _, preds = torch.max(outputs, 1)

'''
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   2844     if size_average is not None or reduce is not None:
   2845         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2846     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
   2847 
   2848 

TypeError: cross_entropy_loss(): argument 'input' (position 1) must be Tensor, not collections.OrderedDict
'''

Your model must be returning a dictionary, and you probably need to use part of it. I don't know the model, so I can't speculate about its return value. Could you print outputs.keys() and post the result here?
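For example, a quick check along these lines (just a sketch, using a dummy batch of your input size) would show what the model actually returns:

dummy = torch.randn(2, 3, 256, 256).to(device)  # dummy input, same shape as your batches
model.eval()
with torch.no_grad():
    out = model(dummy)
print(type(out))                                 # reveals whether it is a plain tensor or a dict
print(list(out.keys()) if isinstance(out, dict) else out.shape)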

Thank you for your attention. I'm not sure if this is what you mean, but when I apply this:

outputs = model(inputs)
print(outputs.keys())

I get this result:

/usr/local/lib/python3.7/dist-packages/torchsummary/torchsummary.py in hook(module, input, output)
     24                 ]
     25             else:
---> 26                 summary[m_key]["output_shape"] = list(output.size())
     27                 summary[m_key]["output_shape"][0] = batch_size
     28 

AttributeError: 'collections.OrderedDict' object has no attribute 'size'

Change the line:

outputs = model(inputs)

to:

outputs = model(inputs)['out']

It should work.
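For context: torchvision's segmentation models, DeepLabV3 included, return an OrderedDict mapping output names to tensors. The main prediction lives under the 'out' key, and an auxiliary head appears under 'aux' when the model is built with aux_loss=True. A minimal sketch of the corrected step inside the training loop, assuming everything else stays as in your code:

outputs = model(inputs)['out']   # tensor of shape [N, num_classes, H, W]
loss = criterion(outputs, labels)
_, preds = torch.max(outputs, 1)

One more thing to watch out for: nn.CrossEntropyLoss expects the targets to be long class indices of shape [N, H, W], so depending on your target_transform you may also need something like labels = labels.squeeze(1).long().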
