There is a KL divergence function registered for
KL(Uniform || Normal), but not for
KL(Uniform || MultivariateNormal). Is there any fundamental reason for this, or has it just not been implemented yet?
KL(Normal || Uniform) and
KL(MultivariateNormal || Uniform) are infinite, since the Uniform density is zero outside its bounded support, so the log-density ratio diverges wherever the Normal puts mass outside that region. I’m just trying to understand whether anything prevents the
KL(Uniform || MultivariateNormal) case. I haven’t worked through the math yet, and I haven’t found any good references for these derivations. Any pointers are appreciated.
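For what it’s worth, here is a sketch of the math I’d expect, under the assumption that the Uniform side means a product of independent Uniforms over an axis-aligned box (so it has a valid density in the same space as the MultivariateNormal). Writing KL(U || N) = −H(U) + E_U[−log q(x)], the entropy of the box is Σ log(b−a), and the cross-entropy term is available in closed form from the box’s mean and diagonal covariance. The snippet below checks that closed form against a Monte Carlo estimate; all names are mine, and this is not a registered `torch.distributions.kl` rule:

```python
import torch
from torch.distributions import Uniform, Independent, MultivariateNormal

torch.manual_seed(0)
d = 3
a = torch.tensor([-1.0, 0.0, 2.0])
b = torch.tensor([1.0, 3.0, 5.0])
box = Independent(Uniform(a, b), 1)  # product of Uniforms over a box

mu = torch.tensor([0.5, 1.0, 3.0])
L = torch.tril(torch.rand(d, d)) + torch.eye(d)  # positive diagonal
mvn = MultivariateNormal(mu, scale_tril=L)

# Closed form: KL(U || N) = -H(U) + E_U[-log q(x)]
#   H(U) = sum log(b - a)
#   E_U[-log q] = 0.5 * ( d*log(2*pi) + log|Sigma|
#                         + tr(Sigma^{-1} Cov_U)
#                         + (m - mu)^T Sigma^{-1} (m - mu) )
# with m the box midpoint and Cov_U = diag((b - a)^2 / 12).
m = (a + b) / 2
cov_u = torch.diag((b - a) ** 2 / 12)
sigma = L @ L.T
sigma_inv = torch.inverse(sigma)
log_det = 2 * torch.log(torch.diagonal(L)).sum()
cross_ent = 0.5 * (d * torch.log(torch.tensor(2 * torch.pi)) + log_det
                   + torch.trace(sigma_inv @ cov_u)
                   + (m - mu) @ sigma_inv @ (m - mu))
kl_closed = -torch.log(b - a).sum() + cross_ent

# Monte Carlo check: KL(U || N) = E_U[log p(x) - log q(x)]
x = box.sample((200_000,))
kl_mc = (box.log_prob(x) - mvn.log_prob(x)).mean()
print(float(kl_closed), float(kl_mc))  # should agree closely
```

If this is right, the case is finite and closed-form whenever the box is bounded, so I don’t see a fundamental obstruction — it looks like it just hasn’t been implemented.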