Backward for negative log likelihood loss of MultivariateNormal (in distributions)

I was trying to minimize the negative log likelihood of a multivariate normal with respect to the mean (a 1D tensor) and the covariance matrix, but it gives me an error:

m = torch.distributions.normal.Normal(mu, C)
# mu is a Variable of size (n,)
# C is a Variable of size (n, n)
loss = -m.log_prob(x)  # x is a Variable of size (n,)
# the line above raises an error

I also tried to sample from this distribution, but it gives a sample of size (n, n), not an (n,) vector:

m.sample()

I'm stuck here. I want to get gradients with respect to mu and C. Any idea what I can do?

Normal is a batched univariate distribution. Your mu is being broadcast up to the same shape as C, producing an (n, n) batch of univariate normals. If you want a MultivariateNormal distribution, use:

n = 5
mu = torch.zeros(n)
C = torch.eye(n)
m = torch.distributions.MultivariateNormal(mu, covariance_matrix=C)
x = m.sample()  # should have shape (n,)
loss = -m.log_prob(x)  # should be a scalar
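
To get the gradients the original post asks for, you can mark mu and C as requiring grad and call backward() on the loss. A minimal sketch (the data point x here is just random, for illustration):

import torch

n = 5
mu = torch.zeros(n, requires_grad=True)  # mean, tracked by autograd
C = torch.eye(n, requires_grad=True)     # covariance, tracked by autograd
m = torch.distributions.MultivariateNormal(mu, covariance_matrix=C)
x = torch.randn(n)                       # observed data point (illustrative)
loss = -m.log_prob(x)
loss.backward()
print(mu.grad.shape)  # torch.Size([5])
print(C.grad.shape)   # torch.Size([5, 5])

Note that if you optimize C directly, a gradient step can push it out of the positive definite cone; a common workaround is to parameterize the distribution through its Cholesky factor via the scale_tril argument instead.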

Hi @fritzo, is MultivariateNormal also a batched class now? If I give it a mean tensor of shape batch_size x 2 and a covariance of shape batch_size x 2 x 2, will it consider the input as batched?

Yes, the result will be batched.
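For example, a minimal sketch with the shapes from your question (batch_size and the shared identity covariance are just illustrative choices):

batch_size = 10
mu = torch.zeros(batch_size, 2)
C = torch.eye(2).expand(batch_size, 2, 2)  # one 2x2 covariance per batch element
m = torch.distributions.MultivariateNormal(mu, covariance_matrix=C)
m.batch_shape          # torch.Size([10])
m.event_shape          # torch.Size([2])
m.sample().shape       # torch.Size([10, 2])
m.log_prob(mu).shape   # torch.Size([10]), one log density per batch element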