Model(images, targets) is returning a list of dictionaries

This is my first time using PyTorch (sorry if I picked the wrong topic listing), and I’ve run into a strange issue while programming the validation phase of my model. Here’s the code snippet:

with torch.no_grad():  
    for images, targets in validation_dataloader:
        # Forward pass

        loss_dicts = model(images, targets)  
        batch_losses = [sum(loss_dict.values()) for loss_dict in loss_dicts]
        total_loss = sum(batch_losses) / len(batch_losses)  

        total_val_loss += total_loss

model(images, targets) is returning a list of dictionaries, where each dictionary in the list corresponds to an image in the batch. This isn’t what I expected: my understanding was that the call would return loss information. Was what I read wrong, or is there any way I can fix this?
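
If it helps, here’s a minimal, self-contained sketch of the two situations I’m comparing. The model below (torchvision’s fasterrcnn_resnet50_fpn) and the dummy data are just stand-ins for illustration, not my exact setup:

import torch
import torchvision

# Stand-in detection model, not necessarily the one I'm using.
# (Older torchvision versions take pretrained=False instead of the weights arguments.)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None
)

# One dummy image with one dummy box/label, just to show the return types.
images = [torch.rand(3, 224, 224)]
targets = [{"boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
            "labels": torch.tensor([1])}]

model.train()
out = model(images, targets)
print(type(out))   # <class 'dict'> -- a dict of losses, like during training

model.eval()
with torch.no_grad():
    out = model(images, targets)
print(type(out))   # <class 'list'> -- one prediction dict per image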

If needed, I’d be more than happy to provide any other context.

You’re using the model function wrong. To get the loss you have to define a loss function (aka criterion) like so:

criterion = nn.CrossEntropyLoss()

Then you pass only the images through the model and compare the output with the correct labels using the criterion:

y_pred = model(X_train)
loss = criterion(y_pred, y_train)
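
For completeness, here is a minimal self-contained sketch of that pattern; the linear model, shapes, and data below are made up purely for illustration:

import torch
import torch.nn as nn

model = nn.Linear(20, 5)                 # stand-in for your network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

X_train = torch.randn(8, 20)             # batch of 8 samples, 20 features each
y_train = torch.randint(0, 5, (8,))      # integer class labels

y_pred = model(X_train)                  # forward pass (raw logits)
loss = criterion(y_pred, y_train)        # compare predictions with labels

optimizer.zero_grad()
loss.backward()
optimizer.step()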

But it works the way I’d hoped during the training phase:

    model.train()  # Set the model to training mode

    for images, targets in training_dataloader:  
        optimizer.zero_grad()  
        loss_dict = model(images, targets)  
        print(type(loss_dict))
        losses = sum(loss for loss in loss_dict.values())  

        losses.backward()  
        optimizer.step()  

    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {losses.item()}')
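
One caveat I noticed in my own snippet: the print at the end only reports the loss of the last batch. If an epoch-level number is wanted, it has to be accumulated inside the loop; a rough sketch, reusing the same names as above:

    epoch_loss = 0.0
    for images, targets in training_dataloader:
        optimizer.zero_grad()
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())

        losses.backward()
        optimizer.step()

        epoch_loss += losses.item()  # accumulate each batch's loss

    print(f'Epoch [{epoch+1}/{num_epochs}], Avg Loss: {epoch_loss / len(training_dataloader):.4f}')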