KL distance for Gaussian Mixture Model

I want to calculate the KL divergence between two multivariate Gaussian Mixture Models (GMMs), with their parameter lists (weights, means, covariances) given as tensor arrays. Is there already an available implementation?


What about the KL divergence loss?

criterion = nn.KLDivLoss()

see the documentation here.
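Note that `nn.KLDivLoss` compares two discrete distributions given as tensors, and (with the default settings) it expects log-probabilities as input and probabilities as target. A minimal usage sketch:

```python
import torch
import torch.nn as nn

# nn.KLDivLoss expects input = log-probabilities, target = probabilities.
# reduction='batchmean' divides by the batch size, matching the
# mathematical definition of KL divergence per sample.
criterion = nn.KLDivLoss(reduction='batchmean')

log_probs = torch.log_softmax(torch.randn(4, 10), dim=1)  # input
target = torch.softmax(torch.randn(4, 10), dim=1)         # target
loss = criterion(log_probs, target)
```

The result is the mean KL divergence over the batch, which is non-negative for valid probability distributions.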

Thanks for your reply. But that is not for a mixture of Gaussians, I think.

The KL divergence between two mixtures of Gaussians has no closed form, so it is not tractable to compute exactly.

I know that; I was just asking whether someone has implemented an approximation of this KL divergence.
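One common approximation is Monte Carlo estimation: sample from p and average log p(x) − log q(x). Here is a minimal sketch using `torch.distributions.MixtureSameFamily` (the helper names `make_gmm` and `mc_kl` are illustrative, not from a library):

```python
import torch
from torch.distributions import (Categorical, MixtureSameFamily,
                                 MultivariateNormal)

def make_gmm(weights, means, covs):
    # Build a GMM from parameter tensors:
    #   weights: (K,), means: (K, D), covs: (K, D, D)
    mix = Categorical(probs=weights)
    comp = MultivariateNormal(loc=means, covariance_matrix=covs)
    return MixtureSameFamily(mix, comp)

def mc_kl(p, q, n_samples=10_000):
    # Monte Carlo estimate of KL(p || q) = E_{x~p}[log p(x) - log q(x)]
    x = p.sample((n_samples,))
    return (p.log_prob(x) - q.log_prob(x)).mean()

# Example: two 2-component GMMs in 2-D
torch.manual_seed(0)
p = make_gmm(torch.tensor([0.5, 0.5]),
             torch.zeros(2, 2),
             torch.eye(2).expand(2, 2, 2))
q = make_gmm(torch.tensor([0.3, 0.7]),
             torch.ones(2, 2),
             torch.eye(2).expand(2, 2, 2))
kl = mc_kl(p, q)
```

The estimate converges to the true KL as `n_samples` grows; there are also deterministic approximations (e.g. matching each component of p to its closest component of q), but the Monte Carlo estimator is the simplest unbiased one.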

Check whether this helps: https://github.com/yaqiz01/cs229-invoice-recognition/blob/e198cfc337003df1c1aa4aaa998f8082ce95dc51/bin/experiments#L477

Reference: http://cs229.stanford.edu/proj2016/report/LiuWanZhang-UnstructuredDocumentRecognitionOnBusinessInvoice-report.pdf