torch.distributions.kumaraswamy.Kumaraswamy generates samples outside its support

The Kumaraswamy distribution is defined over the open interval (0, 1), but torch.distributions.kumaraswamy.Kumaraswamy can generate samples that equal 1. This causes nan outputs when computing the log_prob of such samples.

import torch as ch
import torch.distributions as dist

# Kumaraswamy(0.2, 0.2) concentrates most of its mass near the boundaries of (0, 1)
kumaraswamy = dist.kumaraswamy.Kumaraswamy(ch.tensor([0.2]), ch.tensor([0.2]))
samples = kumaraswamy.sample(ch.Size([1000000]))
# some samples equal exactly 1.0, outside the open-interval support
assert (samples == 1.0).sum() > 0
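For reference, the nan from the report above can be seen directly by continuing the snippet. (On the builds I am aware of, a boundary sample inverts back to the edge of the base Uniform, where the power transform's Jacobian term evaluates 0/0 and produces nan.)

# evaluating log_prob at the boundary value yields nan
print(kumaraswamy.log_prob(ch.tensor([1.0])))  # tensor([nan])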

I suspect this happens due to the use of torch’s Uniform distribution, which is defined on the half-open interval [0, 1), in the sample() method (see torch.distributions.transformed_distribution — PyTorch 1.13 documentation).

Since torch.distributions.uniform.Uniform can generate samples that equal 0, these may be transformed into Kumaraswamy samples that equal 1, outside its support.

Hi Kiranchari!

I think it is legitimate to expect that pytorch’s Kumaraswamy not return
samples that are equal to 1.0 (or to 0.0).

I can reproduce this issue running your code.

Furthermore, Kumaraswamy(0.1, 0.1) generates samples equal to 0.0 as well as to 1.0.

(Note, performing the analogous test with Beta does not produce samples
outside of its open-interval (0, 1) support; a quick version of that check is sketched below.)
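A quick version of that Beta check, in the same spirit as the snippet above (it counts boundary hits rather than asserting):

import torch
import torch.distributions as dist

beta = dist.Beta(torch.tensor([0.2]), torch.tensor([0.2]))
samples = beta.sample(torch.Size([1000000]))
# both counts come out zero: Beta stays strictly inside (0, 1)
print((samples == 0.0).sum().item(), (samples == 1.0).sum().item())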

I would consider this a legitimate bug – could you file a GitHub issue?

(@ptrblck, perhaps an expert could chime in.)

This doesn’t seem likely because your test with “only” 1,000,000 samples is
quite unlikely to sample an exact 0.0 from Uniform.

Note that floating-point numbers are much denser near 0.0 than near 1.0. I expect
that samples from Uniform that are close to 0.0 (but not equal to it) get transformed
to 1.0 (and Uniform samples that are close to 1.0 get transformed to 0.0), as sketched below.
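You can check this by pushing a small but nonzero uniform value through the inverse-CDF transform that sample() effectively applies, x = (1 - u**(1/b))**(1/a). (This is a sketch of my understanding of the TransformedDistribution pipeline, not the exact library code.)

import torch

a = b = 0.2                    # parameters from the repro above
u = torch.tensor([1e-40])      # close to 0.0, but not equal to it
# u**(1/b) = u**5 underflows to 0.0 in float32, so the result
# rounds to exactly 1.0, outside the open-interval support
x = (1.0 - u.pow(1.0 / b)).pow(1.0 / a)
print(x)                       # tensor([1.])

In fact, any u whose fifth power falls below float32’s rounding distance from 1.0 (roughly u < 0.03 here) already yields an exact 1.0, which is why boundary samples show up so often in a million draws.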

Best.

K. Frank

Thanks for pinging on this issue!
I assume this Uniform usage might create these invalid values?
@kiranchari As @KFrank mentioned, would you mind creating an issue on GitHub so that we could track and fix it, please?

CC @vishwakftw for visibility

Thanks @ptrblck and @KFrank. I have created an issue on Github: torch.distributions.kumaraswamy.Kumaraswamy generates samples outside its support (0,1) · Issue #95548 · pytorch/pytorch · GitHub

I found the same bug and only later discovered that this topic was already open.
What is the status?
I’m using pytorch 2.5 and the Kumaraswamy distribution still has this bug: sampling from it, I can get 0, which should not happen.

Some of this behavior can also be due to underflow in the sampling computation. See Stabilizing the Kumaraswamy Distribution · Issue #139019 · pytorch/pytorch · GitHub, where I am attempting to fix many of the bugs in the current Kumaraswamy implementation.

Thanks, but in the end I implemented the Kumaraswamy distribution completely from scratch, ignoring the pytorch version, and it works perfectly.
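For anyone else hitting this: the following is not the poster’s code, just a minimal sketch of the usual from-scratch approach, inverse-CDF sampling done in log space, with an arbitrary eps guard keeping results strictly inside (0, 1).

import torch

def sample_kumaraswamy(a, b, shape, eps=1e-6):
    # Inverse CDF of Kumaraswamy(a, b): x = (1 - (1 - u)**(1/b))**(1/a)
    u = torch.rand(shape).clamp(eps, 1 - eps)        # keep u away from {0, 1}
    t = torch.log1p(-u) / b                          # log((1 - u)**(1/b)), stable near u = 0
    x = torch.exp(torch.log(-torch.expm1(t)) / a)    # (1 - e**t)**(1/a), computed in log space
    return x.clamp(eps, 1 - eps)                     # final guard against rounding onto the boundary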

It seems that I’m not the only one experiencing problems with the pytorch version of the Kumaraswamy distribution. I found this paper: https://arxiv.org/pdf/2410.00660, and it uses the exact pytorch implementation, which is known to be unstable, for its study.

Hi Dave,

That is my paper; it’s linked in the issue I posted above. We find that all common implementations have unresolved numerical issues in the relevant distribution-related functions used for probabilistic inference. We provide a stable Kumaraswamy implementation, which I’m trying to integrate into PyTorch.

:rofl:
I did not see that your name here is the same as the one on the paper.
But you are g**damn right. The Kumaraswamy is strangely under-used.

By the way, I couldn’t find the code you mention in the paper, because there is no link to any repository. In the link you posted above, I’m not sure how to get to the right place.

Do you have a link to your repository somewhere?

The Kumaraswamy was strangely under-used. But hopefully this paper explains why!

I haven’t put up a Github repo yet. I’m planning to post one upon the paper’s acceptance to the journal I’m submitting it to.

For now, the code is available in the Arxiv submission.

The paper ‘Stabilizing the Kumaraswamy Distribution’ has now been accepted at TMLR: Stabilizing the Kumaraswamy Distribution | OpenReview

The paper’s associated GitHub repo is now also public: GitHub - maxwass/stabilizing-the-kumaraswamy-distribution