Loss function that optimises uncertainty from MC dropout

Hello,

I have a question regarding MC dropout. I understand that we can estimate uncertainty by running repeated forward passes at test time with dropout kept enabled and gradients disabled, then taking the standard deviation of those predictions.
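Roughly, what I mean by "dropout enabled" at test time is something like the helper below (this is just my sketch, assuming model is a plain nn.Module with nn.Dropout layers; other dropout variants aren't handled):

import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    # Keep the model in eval mode overall (BatchNorm etc. stay frozen),
    # but put the Dropout layers back into train mode so they remain
    # stochastic during the test-time forward passes.
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()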

My question is: what if we wanted to use a loss function that takes this uncertainty into account? How do we make sure the uncertainty estimated at test time is actually optimised by the loss during training?
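To make that more concrete, the kind of loss I have in mind is something like a Gaussian negative log-likelihood, where sigma weights the squared error. This is only my own sketch of what some_loss_func could be, not something I've validated:

import torch

def gaussian_nll_loss(y_true, y_pred, sigma, eps=1e-6):
    # Negative log-likelihood of y_true under N(y_pred, sigma^2),
    # up to an additive constant; eps keeps the variance away from zero.
    var = sigma.clamp(min=eps) ** 2
    return (0.5 * torch.log(var) + 0.5 * (y_true - y_pred) ** 2 / var).mean()

I'm aware PyTorch also ships torch.nn.GaussianNLLLoss, which expects the variance as an input, so that could probably be used instead.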

Thank you

Here’s how I have calculated the uncertainty:

mc_predictions = []

# Dropout layers are assumed to still be active here, otherwise every
# forward pass would return the same prediction.
with torch.no_grad():
    for _ in range(100):
        mc_y_pred = model(X_train_batch)
        mc_predictions.append(mc_y_pred)

mc_predictions = torch.stack(mc_predictions)
sigma = torch.std(mc_predictions, dim=0)

# y_pred could instead be the mean of mc_predictions
y_pred = model(X_train_batch)

# sigma was computed under torch.no_grad(), so it carries no gradient;
# this is the part I'm unsure how to handle.
loss = some_loss_func(y_true, y_pred, sigma)

# backward pass and weight update
optimizer.zero_grad()
loss.backward()
optimizer.step()
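One direction I've been considering, but am not sure is sound, is to move the MC passes into the training step without torch.no_grad(), so that sigma stays in the autograd graph and receives gradients. Something like the following, where T is kept small because every pass stays in the graph, and gaussian_nll_loss is the sketch from above:

import torch

def training_step(model, X_batch, y_batch, optimizer, T=10):
    model.train()  # dropout stays stochastic and gradients are recorded
    preds = torch.stack([model(X_batch) for _ in range(T)])
    y_pred = preds.mean(dim=0)
    sigma = preds.std(dim=0)

    # sigma is now differentiable, so the loss can actually shape it
    loss = gaussian_nll_loss(y_batch, y_pred, sigma)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Does something like this make sense, or is there a more standard way to make the MC dropout uncertainty part of the optimisation?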