Homoscedastic uncertainty loss implementation

I am trying to implement the homoscedastic uncertainty loss from *Geometric Loss Functions for Camera Pose Regression with Deep Learning* (Kendall & Cipolla, CVPR 2017).

import torch
import torch.nn as nn

class HomoLoss(nn.Module):
    """Homoscedastic uncertainty loss from Kendall & Cipolla (2017):
    L = exp(-sx) * L_trans + sx + exp(-sq) * L_rot + sq,
    where sx and sq are learned log-variances."""

    def __init__(self):
        super().__init__()
        # Learned log-variances for translation (sx) and rotation (sq).
        self.sx = nn.Parameter(torch.tensor(0.0))
        self.sq = nn.Parameter(torch.tensor(-3.5))
        self.criterion = nn.MSELoss(reduction='mean')

    def forward(self, out, gt):
        # Pose layout: first 3 components are translation, the rest rotation.
        trans_loss = self.criterion(out[:, :3], gt[:, :3])
        rot_loss = self.criterion(out[:, 3:], gt[:, 3:])
        return self.learned_loss(trans_loss, rot_loss)

    def learned_loss(self, trans_loss, rot_loss):
        return ((-self.sx).exp() * trans_loss + self.sx
                + (-self.sq).exp() * rot_loss + self.sq)

Is this the correct way to implement it? My loss is always negative — is it behaving properly?

Any leads would be appreciated!
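For what it's worth, a quick numeric check (plain floats, independent of the class above, with made-up residual values) suggests a negative loss is expected here: the additive sx and sq terms can outweigh the positive residual terms.

```python
import math

# Plug sample residuals into L = exp(-sx)*L_trans + sx + exp(-sq)*L_rot + sq.
# The residual values below are made up purely for illustration.
sx, sq = 0.0, -3.0
trans_loss, rot_loss = 0.5, 0.001

loss = math.exp(-sx) * trans_loss + sx + math.exp(-sq) * rot_loss + sq
print(loss)  # roughly -2.48: negative even though both residuals are positive

# For a fixed residual L, the term exp(-s)*L + s is minimized at s = log(L),
# where it contributes 1 + log(L) -- negative whenever L < 1/e.
```

So a loss that drifts negative as the residuals shrink is consistent with the formula, not necessarily a sign of divergence.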

@albanD can you help me out with this? When I start training with the values sx, sq = 0.0, -3.0 suggested by the authors, my loss diverges to negative values.

epoch1-------[02:56<00:00,  3.62it/s, average_loss=-18.351]
epoch2-------[02:54<00:00,  3.66it/s, average_loss=-22.009]
epoch3-------[02:54<00:00, 4.55it/s, average_loss=-23.185]

Is this how it is supposed to work?


I am not familiar with this loss function, I'm afraid. Is there any reason why you can't use the built-in one, GaussianNLLLoss (see the PyTorch documentation)?
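To make the suggestion concrete, here is a minimal sketch of how `nn.GaussianNLLLoss` could be wired up for this setting — the 7-D pose layout and the per-dimension learned log-variance are my assumptions, not part of the original posts.

```python
import torch
import torch.nn as nn

# Hypothetical setup: batch of 7-D poses (3 translation + 4 quaternion).
out = torch.zeros(8, 7)            # network predictions (dummy values)
gt = torch.full((8, 7), 0.01)      # ground truth (dummy values)

# One learnable log-variance per output dimension; exponentiating
# keeps the variance positive, as GaussianNLLLoss requires.
log_var = nn.Parameter(torch.zeros(7))
var = log_var.exp().expand_as(out)

loss_fn = nn.GaussianNLLLoss(reduction='mean')
loss = loss_fn(out, gt, var)       # small positive value here, since var == 1
```

Gradients flow into `log_var` through `var`, so the uncertainty weights are learned jointly with the network, much like sx and sq above.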


Thanks @albanD. I was totally unaware of this PyTorch loss. It is similar to what I am trying to implement. Is it natural for this loss to converge to negative values?

I am not a specialist in what it is doing, but it seems to make a homoscedastic assumption as well, so I thought it might be related.
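On the negative-values question: yes, a Gaussian NLL can be negative, because a continuous density can exceed 1 when the variance is small. A quick check with the built-in loss (dummy values chosen for illustration):

```python
import torch
import torch.nn as nn

loss_fn = nn.GaussianNLLLoss(reduction='mean')
pred = torch.zeros(4, 3)
target = torch.zeros(4, 3)          # perfect predictions
var = torch.full((4, 3), 0.01)      # confident (small) variance

loss = loss_fn(pred, target, var)
# 0.5 * log(0.01) is about -2.30, so the mean loss is negative
```

So a negative training loss on its own isn't a bug for either GaussianNLLLoss or the homoscedastic formulation — what matters is whether the pose error on a validation set keeps improving.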
