What is the correct way to combine the individual losses of the elements of a mini-batch in order to perform mini-batch gradient descent?
I have an unsupervised loss function `myLossFunction` that can only process one element of the dataset at a time. So I would like to loop over the predictions and calculate the losses one by one.
- But how do I then combine the losses? Just `torch.mean` them?
- And is it required to call `loss.backward()` after calculating each individual loss, or is it fine where it is in the example code below?
```python
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=10)

for batch in dataloader:
    predictions = model(batch)
    losses = []
    for prediction in predictions:
        losses.append(myLossFunction(prediction))
    # what do I do here to combine the losses into `loss`?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
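To make the question concrete, here is a minimal runnable sketch of the two variants I am asking about. The `Linear` model, the random batch, and this `myLossFunction` body are hypothetical stand-ins for my real setup (my actual loss is unsupervised and more involved); only the structure matters:

```python
import torch

# Hypothetical stand-ins so the sketch runs; my real model and loss differ.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def myLossFunction(prediction):
    # Placeholder element-wise loss: returns a scalar for one prediction.
    return prediction.pow(2).sum()

batch = torch.randn(10, 4)

# Variant A: combine the per-element losses first, then one backward pass.
predictions = model(batch)
losses = [myLossFunction(p) for p in predictions]
loss = torch.stack(losses).mean()  # is torch.mean the right combiner?
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Variant B: call backward() once per element; gradients accumulate in .grad.
# retain_graph=True is needed because every per-element loss shares the
# graph built by the single model(batch) forward pass.
predictions = model(batch)
optimizer.zero_grad()
for p in predictions:
    (myLossFunction(p) / len(predictions)).backward(retain_graph=True)
optimizer.step()
```

My understanding is that gradients accumulate across `backward()` calls, so the two variants should produce the same parameter update, but I would like confirmation on which one is the intended pattern.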