I have a tensor output by a neural network, representing the score of each category. I want to get two types of sampling results:
In the first case, the higher the value, the more likely it is to be sampled;
In the second case, the lower the value, the more likely it is to be sampled.
I can easily apply torch.multinomial() in the first case, but how can I get what I want in the second case?

For example, if I have the tensor [0.1, 0.3, 0.4, 0.0, 0.7, 0.0, 0.2] and I want 2 samples drawn in each case, then:
In the first case, [0.7, 0.4] are most likely to be drawn, and in the second case, [0.1, 0.2] are most likely to be drawn.
Note that all the scores are in [0, 1], and if a score is 0 we want to mask it, meaning that score shouldn't be drawn in either case.

In the first case your proposed use of torch.multinomial()
is fine.
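For concreteness, here is a minimal sketch of the first case, using the scores from your example (nonzero weights passed to torch.multinomial() are already a valid mask, since a weight of 0.0 is never sampled):

```python
import torch

# Scores from the question; a higher score means more likely to be sampled
scores = torch.tensor([0.1, 0.3, 0.4, 0.0, 0.7, 0.0, 0.2])

# Draw 2 samples without replacement; entries with weight 0.0 are never drawn
idx = torch.multinomial(scores, num_samples=2, replacement=False)
print(idx, scores[idx])
```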

Based on your statement that “all the scores are from [0, 1],”
you can flip and mask your scores as follows:

scores = scores.sign() * (1.0 - scores)

If (before flipping) scores is exactly 0.0, scores.sign()
will return 0.0 (otherwise 1.0). After flipping, 0.0 becomes 1.0,
but gets set back to 0.0 when multiplied by the 0.0 from sign().

You can now feed your flipped, masked scores to torch.multinomial()
(or torch.distributions.categorical.Categorical),
and the masked entries will never get sampled because you’ve
specified a probability of 0.0.
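Putting it together, a minimal sketch of the second case with your example scores:

```python
import torch

scores = torch.tensor([0.1, 0.3, 0.4, 0.0, 0.7, 0.0, 0.2])

# Flip so that low scores get high sampling weight;
# sign() keeps exact zeros masked at 0.0
flipped = scores.sign() * (1.0 - scores)
# flipped is [0.9, 0.7, 0.6, 0.0, 0.3, 0.0, 0.8]

# Draw 2 samples without replacement; masked entries can never be drawn
idx = torch.multinomial(flipped, num_samples=2, replacement=False)
print(idx, scores[idx])
```

Indices 0 and 6 (original scores 0.1 and 0.2) now carry the largest weights, while indices 3 and 5 remain masked.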