Loss function for floating-point targets

Yes, I think this might solve my problem.
I need to pass the probabilities generated by the (larger) model, after applying softmax to them, to the loss function, where both tensors have the same shape.

>>> import torch
>>> def softXEnt(input, target):
...     # soft cross-entropy: input is log-probabilities, target is a probability distribution
...     return -(target * input).sum() / input.shape[0]
...
>>> input = torch.rand(2, 5)
>>> target = torch.rand(2, 5)
>>> input = torch.nn.functional.log_softmax(input, dim=1)
>>> target = torch.nn.functional.softmax(target, dim=1)
>>> softXEnt(input, target)
tensor(1.6736)
>>> torch.nn.KLDivLoss(reduction='batchmean')(input, target)
tensor(0.0981)
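
For what it's worth, the gap between the two numbers should just be the batch-averaged entropy of the target distribution, since cross-entropy = target entropy + KL divergence, and that entropy term doesn't depend on the model's predictions. A minimal sanity check, reusing the `input` and `target` tensors from above (exact values will vary because `torch.rand` is unseeded):

>>> # cross-entropy = target entropy + KL divergence
>>> target_entropy = -(target * target.log()).sum() / target.shape[0]
>>> kl_loss = torch.nn.KLDivLoss(reduction='batchmean')(input, target)
>>> gap = softXEnt(input, target) - kl_loss   # should equal target_entropy up to float error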

Since the results differ from the KL-divergence loss, I think this might actually be helpful for my use case. Thanks.