Calculating KL divergence between two Gaussians with the torch.distributions package

I’m looking to estimate the KL divergence using Monte Carlo sampling. When I use the non-MC (analytic) version I get excellent results, but when I swap in the MC version below I get bad results.
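(The estimator I have in mind is KL(q || p) ≈ (1/N) * sum_i [log q(z_i) - log p(z_i)] with z_i ~ q; in the code below I take a single sample per batch element.)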

Anyone know what I’m doing wrong?

        z_mu, z_var = self.enc(x)
        
        # ---------
        # sample Z
        # ---------
        # init likelihood and prior
        # z_var is treated as a log-variance here, so std = exp(z_var / 2)
        std = torch.exp(z_var / 2)

        # Normal likelihood
        Q = torch.distributions.normal.Normal(z_mu, std)

        # Normal(0, 1) prior
        P = torch.distributions.normal.Normal(loc=torch.zeros_like(z_mu), scale=torch.ones_like(std))

        # sample Z
        z = Q.rsample()

        # KL div
        qz = Q.log_prob(z)
        pz = P.log_prob(z)

        kl_loss = torch.mean(qz - pz)
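
For reference, the non-MC version I’m comparing against is the closed-form KL for diagonal Gaussians, roughly like this (just a sketch of my setup, reusing Q, P, z_mu, z_var, and std from above, and assuming z_mu/z_var have shape [batch, latent_dim] with z_var being the log-variance):

        # closed-form KL(q || p) per latent dimension, with p = N(0, I)
        kl_per_dim = torch.distributions.kl_divergence(Q, P)

        # sum over latent dimensions, average over the batch
        kl_analytic = kl_per_dim.sum(dim=-1).mean()

        # equivalent hand-written form: 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2)
        # kl_analytic = torch.mean(torch.sum(0.5 * (std ** 2 + z_mu ** 2 - 1 - z_var), dim=-1))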

Isn’t it a dimension problem? When you apply log_prob, doesn’t it return a value per latent dimension of each sample, rather than one value per sample?
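
i.e. something like this (just a sketch, assuming z has shape [batch, latent_dim]), summing the per-dimension log-probs before taking the batch mean:

        # sum over the latent dimension to get the joint log-prob of each sample,
        # then average the single-sample KL estimate over the batch
        kl_loss = torch.mean(torch.sum(qz - pz, dim=-1))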

I’m not sure I follow. Isn’t every z independent of every other z for each batch item?