Convex combination of different model probabilities

Hi all,

I want to take a convex combination of the final class probabilities from two different models in a classification problem.
I want to do something like this:

import torch
import torch.nn.functional as F

logits1 = model1(input)
logits2 = model2(input)

prob1 = F.softmax(logits1, dim=1)
prob2 = F.softmax(logits2, dim=1)

# beta in (0, 1) weights the two models
final_prob = beta * prob1 + (1 - beta) * prob2
log_prob = torch.log(final_prob)

# NLLLoss is a module class; use the functional form directly
loss = F.nll_loss(log_prob, target)

Is there a faster or more numerically stable way to do this?
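For reference, one option I'm aware of is to stay in log space the whole time: since log(beta*p1 + (1-beta)*p2) = logsumexp(log(beta) + log(p1), log(1-beta) + log(p2)), the two `log_softmax` outputs can be combined with `torch.logsumexp` and fed to `F.nll_loss` directly, avoiding an explicit `torch.log` of a possibly tiny probability. A minimal sketch (assuming `beta` is a fixed scalar in (0, 1); `mixture_nll` is just an illustrative helper name):

```python
import math

import torch
import torch.nn.functional as F


def mixture_nll(logits1, logits2, target, beta=0.5):
    # log(beta * p1 + (1 - beta) * p2), computed entirely in log space.
    log_p1 = F.log_softmax(logits1, dim=1) + math.log(beta)
    log_p2 = F.log_softmax(logits2, dim=1) + math.log(1.0 - beta)
    # logsumexp over the stacked "mixture component" dimension.
    log_mix = torch.logsumexp(torch.stack([log_p1, log_p2]), dim=0)
    return F.nll_loss(log_mix, target)
```

This computes the same quantity as the softmax-then-log version, but `log_softmax` and `logsumexp` are implemented with the max-subtraction trick, so it behaves better when one model assigns near-zero probability to the target class.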