Torch.rand_like only generating positive numbers

I have tested this function on different devices and it always behaves the same way: torch.rand_like generates only positive values, and I don’t understand why, because this function should be generating numbers from a normal distribution. I get even more confused when I consult the PyTorch docs, which say:

torch.randn_like(input) is equivalent to torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)

The funny thing is that torch.randn does generate negative numbers, which is consistent with the fact that it samples from a normal distribution.

Has anyone else noticed this before? Does anybody know if this is a bug? Perhaps I am the one who is confused; if so, I would appreciate it if someone explained why.

Additionally, I’ve seen many implementations of diffusion models (DDPM) and VAEs use this function; doesn’t this affect the correct functioning of the model? Both networks should be sampling from the full normal distribution, not only its positive part.
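(For reference, the two samplers draw from different distributions. A minimal check, assuming a recent PyTorch, contrasting torch.rand, which is uniform on [0, 1), with torch.randn, which is standard normal:)

```python
import torch

torch.manual_seed(0)  # for reproducibility

u = torch.rand(100_000)   # uniform on [0, 1)
n = torch.randn(100_000)  # standard normal: mean 0, std 1

# Uniform samples are never negative; normal samples are negative about half the time.
print(u.min().item() >= 0)            # True
print((n < 0).float().mean().item())  # roughly 0.5
```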

Could you provide more details, such as the input you used for randn_like?

I have not been able to reproduce the bug you described:

import torch
a = torch.randn(10)
torch.randn_like(a)

The snippet above gave me:
tensor([ 0.8246, 0.6652, 0.4302, 0.4329, 1.3249, 0.7312, -0.2145, 0.9759, -1.8222, 0.3011])
which contains negative numbers (-0.2145 and -1.8222).

My PyTorch version: 1.13.1+cu117, Python: 3.10.12

Maybe your input is a uint dtype, and therefore you only get uint output (so only positive values?)

That was the first thing I tested. I am running a similar test to the one you posted:

import torch
t1 = torch.randn((int(1e6),))
t2 = torch.rand_like(t1)
print((t1 < 0).any().item(), (t2 < 0).any().item())

And this gave me the output: True False. rand_like is receiving a torch.float32 tensor, since t1 was produced by randn. The histogram of the generated numbers is even stranger:


Clearly, the rand_like function is sampling from a uniform (0, 1) distribution, which doesn’t make sense given its definition in the docs.
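One quick way to confirm this without plotting is to look at summary statistics: a uniform(0, 1) sample has mean ≈ 0.5 and std ≈ 1/√12 ≈ 0.289, while a standard normal has mean ≈ 0 and std ≈ 1. A sketch, assuming the same t1/t2 setup as above:

```python
import torch

t1 = torch.randn((int(1e6),))  # claimed standard normal
t2 = torch.rand_like(t1)       # the function in question

# Standard normal: mean near 0.0, std near 1.0
print(t1.mean().item(), t1.std().item())
# Uniform(0, 1): mean near 0.5, std near 0.289, minimum >= 0
print(t2.mean().item(), t2.std().item(), t2.min().item())
```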

I ran these tests on 2 different versions:

  • Python: 3.10.11, PyTorch: 2.1.0+cpu
  • Python: 3.10.9, PyTorch: 1.13.1+cu117

My bad, I was using torch.rand_like, not torch.randn_like. :face_with_peeking_eye:
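For anyone landing here later: the two `_like` functions draw from different distributions, so the fix is just swapping the call. A minimal sketch:

```python
import torch

t = torch.randn(1_000_000)  # any float tensor whose shape/dtype/device we want to match

uniform = torch.rand_like(t)   # uniform on [0, 1): never negative
normal = torch.randn_like(t)   # standard normal: negative about half the time

print((uniform < 0).any().item())  # False
print((normal < 0).any().item())   # True
```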
