I saw some code in a repo that was using torch.normal() with a possibly negative standard deviation. I thought the code was wrong and that torch.normal() would surely raise an error for a negative standard deviation, but when I tried it, it just worked, with the resulting sample being positive…
I had trouble finding the exact implementation in the PyTorch library, so I couldn't look at the actual code to see what it is doing. It seems like it just takes the absolute value of the standard deviation, but I am not sure. How does this work?
Any idea why arguments set like this are valid? If I add a size argument to the above code, it fails with an error, but without the size argument it seems to work.
For what it’s worth, I can reproduce your and Doosti’s results on both pytorch 1.6.0 and on 1.8.0.dev20201203. That is, I get the “normal_ expects std > 0.0” error when I call torch.normal() with the third (size) argument, but not when I don’t.
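A minimal reproduction of the discrepancy, with illustrative mean / std values (the original poster's snippet isn't shown, and this reflects the behavior reported on the versions above; newer releases may validate both overloads):

```python
import torch

mean = torch.tensor([0.0, 0.0, 0.0])
std = torch.tensor([-1.0, -2.0, -3.0])   # "standard deviations" that are negative

# Tensor overload: on 1.6.0 / 1.8.0.dev20201203 this samples without complaint.
print(torch.normal(mean, std))

# Scalar overload with a size argument: this one raises
# RuntimeError: normal_ expects std > 0.0
print(torch.normal(0.0, -1.0, size=(3,)))
```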
I suppose you could say that it’s not a bug when software fails to issue an error when the user does something wrong. However, pytorch is generally pretty liberal about issuing invalid-argument errors, and here we have inconsistent reporting of the error based on exactly how the function is called. So I would call it a bug.
Not that it matters, but as to what torch.normal() actually does: it appears that torch.normal(), in effect, generates a standard normal deviate (a sample from a normal distribution with mean zero and standard deviation one), multiplies it by the passed-in sigma, and then shifts it by the passed-in mu. This is statistically the same as just using abs(sigma), but the specific values drawn differ. Here is a quick example:
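Something along these lines, where the manual mu + sigma * randn() construction is a sketch of the behavior described above (illustrative mu and sigma, not the actual ATen implementation):

```python
import torch

torch.manual_seed(0)
mu, sigma = 10.0, -3.0

z = torch.randn(5)               # standard normal deviates, N(0, 1)
with_signed = mu + sigma * z     # what torch.normal() appears to do, in effect
with_abs = mu + abs(sigma) * z   # same distribution, N(mu, |sigma|)

print(with_signed)
print(with_abs)
# The two draws differ only in the sign of their deviation from mu,
# so both are samples from the same N(mu, |sigma|) distribution,
# but the specific values are mirror images of one another.
```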
Thanks for the detailed explanation @KFrank and checking the code @Nikronic! @jwillette could you create a GitHub issue so that we could make sure the methods raise the same error?