I have tested the function on different devices, and I always reach the same conclusion: torch.rand_like generates only positive values, and I don't understand why, because this function should be generating numbers from a normal distribution. I get even more confused when consulting the PyTorch docs, which state:
torch.randn_like(input) is equivalent to torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)
The funny thing is that torch.randn does generate negative numbers, which is consistent with the fact that it also samples from a normal distribution.
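A minimal version of the check I ran (t1 and t2 are just my own names for the tensors):

```python
import torch

t1 = torch.randn(10_000)      # negatives show up, as expected
t2 = torch.rand_like(t1)      # only values in [0, 1) on my machine

print((t1 < 0).any().item())  # True
print((t2 < 0).any().item())  # False
```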
Has anyone else noticed this before? Or does anybody know whether this is a bug? Perhaps I am the one who is confused, and if that is the case, I would appreciate it if someone explained why.
Additionally, I've seen many implementations of diffusion models (DDPM) and VAEs using this function; doesn't this affect the correct functioning of the model? Both kinds of networks should be sampling from the full normal distribution, not only its positive part. See the sketch below for the kind of sampling step I mean.
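This is the sort of reparameterization step I keep seeing (a generic sketch with made-up names, not any specific repo), where the noise is meant to be a full standard normal, so mixing up rand_like and randn_like here would silently restrict the noise to [0, 1):

```python
import torch

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)  # standard normal noise: negatives and positives
    return mu + eps * std
```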
A quick check gave me the output (True, False). rand_like receives a torch.float32 tensor, since t1 was produced by randn. The histogram of the generated numbers is even stranger:
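For anyone who wants to reproduce it, this is roughly the code behind that output and the histogram (reconstructed from memory, so the variable names are hypothetical):

```python
import torch
import matplotlib.pyplot as plt

t1 = torch.randn(100_000)  # t1 is torch.float32
t2 = torch.rand_like(t1)   # so t2 is torch.float32 as well

print(((t1 < 0).any().item(), (t2 < 0).any().item()))  # -> (True, False)

plt.hist(t2.numpy(), bins=50)  # all the mass sits between 0 and 1
plt.show()
```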