Different output tensor size for torch distributions

I’m wondering why the output tensor sizes differ across different torch.distributions. For example:

import torch

probabilities = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
dist1 = torch.distributions.Categorical(torch.tensor(probabilities))

dist2 = torch.distributions.beta.Beta(torch.tensor([2.0]), torch.tensor([2.0]))

distGen1 = lambda: dist1.sample([2])
distGen2 = lambda: dist2.sample([2])


print("size: ", distGen1().size())   #torch.Size([2])
print("size: ", distGen2().size())   #torch.Size([2, 1])

One can use torch.squeeze, for instance, to make them the same size, as sketched below.
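
A minimal sketch of that, assuming the trailing size-1 dimension is the one to remove:

sample2 = dist2.sample([2])                 # torch.Size([2, 1])
squeezed = torch.squeeze(sample2, dim=-1)   # torch.Size([2]), same as dist1.sample([2])
print("size: ", squeezed.size())            # torch.Size([2])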
