I currently have two tensors of size `[N, D]` and `[N, D, D]`. The first represents N means and the second N covariance matrices. At the moment I create N different multivariate Gaussians in a for loop and then evaluate the log probability of some data of size `[X, D]` on each one. Is there a way to vectorize this by passing all the means and all the covariances at once into `MultivariateNormal`, so that a single `log_prob` call outputs a vector of size `[X]`?
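The loop described above might look like the following sketch (sizes `N`, `D`, `X` and all tensors are hypothetical placeholders, not from my actual code):

```python
import torch

N, D, X = 3, 2, 5  # hypothetical sizes
means = torch.randn(N, D)
A = torch.randn(N, D, D)
covs = A @ A.transpose(-1, -2) + 1e-3 * torch.eye(D)  # valid PSD covariances
data = torch.randn(X, D)

# one distribution per component, log_prob evaluated separately
log_probs = torch.stack(
    [torch.distributions.MultivariateNormal(means[i], covs[i]).log_prob(data)
     for i in range(N)],
    dim=1,
)  # shape [X, N]
```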

Wouldn’t this directly work using `MultivariateNormal`?

```
loc = torch.stack((torch.zeros(2), torch.ones(2)+100., torch.ones(2)+1000.))
m = torch.distributions.multivariate_normal.MultivariateNormal(loc, torch.eye(2)[None])
m.sample()
# tensor([[-8.6749e-02, -1.2250e-01],
# [ 1.0310e+02, 1.0090e+02],
# [ 1.0028e+03, 1.0015e+03]])
```
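For the `log_prob` evaluation in the original question (means `[N, D]`, covariances `[N, D, D]`, data `[X, D]`), the batch dimensions broadcast the same way. A minimal sketch with hypothetical sizes, where unsqueezing the data lets it broadcast against the batch of N distributions:

```python
import torch

N, D, X = 3, 2, 5  # hypothetical sizes
means = torch.randn(N, D)
A = torch.randn(N, D, D)
covs = A @ A.transpose(-1, -2) + 1e-3 * torch.eye(D)  # valid PSD covariances

# one distribution object with batch_shape [N] and event_shape [D]
mvn = torch.distributions.MultivariateNormal(means, covs)

data = torch.randn(X, D)
# [X, 1, D] broadcasts against batch_shape [N] -> log_probs has shape [X, N]
log_probs = mvn.log_prob(data.unsqueeze(1))
print(log_probs.shape)  # torch.Size([5, 3])
```

Depending on how the N components should be combined (e.g. a `logsumexp` over `dim=1` for a mixture), reducing over the component axis then yields the `[X]`-shaped result.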

Hi @ptrblck.

Yes, you are absolutely correct. I realised there were some mistakes in my code last night when I was trying this: I was summing across the wrong axis to produce the quantity I needed, which caused the output tensor to have a shape I didn't expect. Apologies, I should have come back and noted this after I found out.