It doesn’t seem like there’s any diagonal normal Distribution. Is there a reason for this? I would personally find it extremely handy, since I often need diagonal normals. Using Normal doesn’t suffice for me because I want a distribution with event_shape = [d].
And I should add that it’s a real pain to go from [batch_size, dim] to [batch_size, dim, dim], expanding the scales into diagonal matrices tensor[i, :, :].
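(For the expansion step specifically: assuming a reasonably recent PyTorch, torch.diag_embed builds batched diagonal matrices from a [batch_size, dim] tensor in one call, which avoids the per-index loop.)

```python
import torch

# Per-dimension standard deviations for a batch of 2, dim 4.
scale = torch.tensor([[0.5, 0.01, 1.0, 10.0],
                      [0.5, 0.01, 1.0, 10.0]])

# diag_embed places the last dimension on the diagonal of a new
# trailing [dim, dim] matrix: [2, 4] -> [2, 4, 4].
cov = torch.diag_embed(scale ** 2)

print(cov.shape)  # torch.Size([2, 4, 4])
```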
You can do this:
>>> result = torch.zeros(2, 4, 4)
>>> sigma = torch.tensor([0, 0.01, 1, 10])
>>> result.diagonal(dim1=1, dim2=2).normal_().mul_(sigma)
tensor([[ 0.0000, -0.0064, 0.6829, -2.7818],
[ 0.0000, 0.0084, -1.2700, 10.5473]])
>>> result
tensor([[[ 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.0064, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.6829, 0.0000],
[ 0.0000, 0.0000, 0.0000, -2.7818]],
[[ 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0084, 0.0000, 0.0000],
[ 0.0000, 0.0000, -1.2700, 0.0000],
[ 0.0000, 0.0000, 0.0000, 10.5473]]])
To clarify: I’m looking for a MultivariateNormal Distribution but with a diagonal covariance matrix. Sampling alone is not the issue: I also need log_prob, entropy, and so on.
Just created https://github.com/pytorch/pytorch/pull/11178.
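(Editor’s note: for readers who want log_prob and entropy with event_shape = [d], one way to get this, assuming a PyTorch version that ships torch.distributions.Independent, is to wrap a Normal and reinterpret its last batch dimension as an event dimension; this behaves like a MultivariateNormal with diagonal covariance.)

```python
import torch
from torch.distributions import Normal, Independent

loc = torch.zeros(3, 4)    # batch of 3, dimension 4
scale = torch.ones(3, 4)

# Independent(..., 1) reinterprets the last batch dim of Normal
# as an event dim, so log_prob sums over it.
dist = Independent(Normal(loc, scale), 1)

print(dist.batch_shape)  # torch.Size([3])
print(dist.event_shape)  # torch.Size([4])

x = dist.sample()                  # shape [3, 4]
print(dist.log_prob(x).shape)      # torch.Size([3]), one density per batch element
print(dist.entropy().shape)        # torch.Size([3]), summed over the event dim
```

log_prob and entropy then aggregate over the d event dimensions automatically, which is exactly the diagonal-covariance behavior requested above.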