Ah OK, I see. The docs currently say weights
should be a sequence, but maybe we should add some more information on the shape.
What happens is that the additional dimension makes torch.multinomial treat each row of weights
as a separate distribution:
weights = torch.empty(10).uniform_()
print(torch.multinomial(weights, 10, True))
> tensor([6, 6, 6, 0, 4, 2, 4, 5, 6, 6])
weights = torch.empty(10, 1).uniform_()
print(torch.multinomial(weights, 10, True))
> tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
So basically you just received sample index 0 over and over: each row of weights is treated as its own distribution, and since every row contains only a single value, 0 is the only index that can be drawn.
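
For comparison, here is a small sketch (the shapes are just chosen for illustration) where each row holds several values, so every row is sampled as its own distribution and you get num_samples draws per row:

weights = torch.empty(3, 5).uniform_()  # 3 distributions over 5 categories each
samples = torch.multinomial(weights, 4, True)
print(samples.shape)
> torch.Size([3, 4])

i.e. 4 samples are drawn (with replacement) from each of the 3 rows, so the result has one row of samples per input distribution.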