How to combine losses of individual dataset elements to perform mini-batch gradient descent

Hello there!
I was wondering what the correct way is to combine individual losses of elements of a dataset to perform mini-batch gradient descent.

I have an unsupervised loss function myLossFunction that can only process one element of the dataset at a time. So I would like to loop over the predictions and calculate the losses one by one.

  1. But how do I then combine the losses? Just torch.mean them?
  2. And is it required to call loss.backward() after calculating each individual loss or is it fine where it is in the example code?
dataloader = DataLoader(dataset, batch_size=10)

for batch in dataloader:
    predictions = model(batch)
    losses = []
    for prediction in predictions:
        losses.append(myLossFunction(prediction))

    # what do I do here to combine the losses?

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Cheers!

Taking the mean should just work: stack the individual scalar losses into a tensor, average them, and call backward() once on that combined loss. You do not need a separate backward() call per element; one call per mini-batch on the combined loss is enough. It does seem surprising that the loss function cannot work on a batch of data, though, and the Python for loop will add some overhead compared to computing the loss in a batched way.
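
As a minimal sketch, reusing the names from your snippet (model, dataset, optimizer, and myLossFunction are assumed to be defined already, and torch / DataLoader imported):

import torch
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=10)

for batch in dataloader:
    predictions = model(batch)

    # one scalar loss per element of the batch
    losses = [myLossFunction(prediction) for prediction in predictions]

    # combine: stack into a 1-D tensor and take the mean
    loss = torch.stack(losses).mean()

    optimizer.zero_grad()
    loss.backward()   # a single backward pass for the whole mini-batch
    optimizer.step()

Because the combined loss is just an average of the per-element losses, autograd propagates the gradients through each element automatically, so the result is equivalent to mini-batch gradient descent on the batched loss.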