Bug in in-place sampling in Bernoulli distribution

Hello.

I think there is a bug in in-place Bernoulli sampling. Here is code that checks for it: it samples using both the in-place and the non-in-place mode.

import torch

print("----BERNOULLI----")
torch.manual_seed(1)
torch.cuda.manual_seed(1)
a = torch.zeros((10,))
print(a.bernoulli_().numpy())
a = torch.zeros((10,))
print(a.bernoulli_().numpy())

torch.manual_seed(1)
torch.cuda.manual_seed(1)
a = torch.zeros((10,))
print(a.bernoulli_().numpy())
a = torch.zeros((10,))
print(a.bernoulli_().numpy())

print("--------------------------")
torch.manual_seed(1)
torch.cuda.manual_seed(1)
a = torch.zeros((10,))
print(torch.bernoulli(a).numpy())
print(torch.bernoulli(a).numpy())

torch.manual_seed(1)
torch.cuda.manual_seed(1)
print(torch.bernoulli(a).numpy())
print(torch.bernoulli(a).numpy())

print("----NORMAL----")
torch.manual_seed(1)
torch.cuda.manual_seed(1)
a = torch.zeros((10,))
print(a.normal_().numpy())
a = torch.zeros((10,))
print(a.normal_().numpy())

torch.manual_seed(1)
torch.cuda.manual_seed(1)
a = torch.zeros((10,))
print(a.normal_().numpy())
a = torch.zeros((10,))
print(a.normal_().numpy())

print("--------------------------")
torch.manual_seed(1)
torch.cuda.manual_seed(1)
a = torch.zeros((10,))
print(torch.normal(a).numpy())
print(torch.normal(a).numpy())

torch.manual_seed(1)
torch.cuda.manual_seed(1)
print(torch.normal(a).numpy())
print(torch.normal(a).numpy())

I think there is insufficient documentation for these APIs. According to the code here, the probability defaults to 0.5 when no parameter is provided for p.
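To illustrate the asymmetry described above (a minimal sketch; exact defaults may vary between PyTorch versions): with no argument, bernoulli_() fills every element using p = 0.5, while torch.bernoulli(a) reads each element of a as that sample's probability.

```python
import torch

# In-place with no argument: the current tensor contents are ignored
# and every element is drawn with probability p = 0.5.
a = torch.zeros(10)
a.bernoulli_()                 # roughly half ones on average

# Out-of-place: each element of b is used as that sample's probability,
# so a tensor of zeros always yields all zeros.
b = torch.zeros(10)
out = torch.bernoulli(b)
print(out)                     # all zeros
```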

If you change the code as below, it gives the same behavior as the non-in-place operator.

import torch

torch.manual_seed(1)
torch.cuda.manual_seed(1)
a = torch.zeros((10,))
print(a.bernoulli_(a).numpy())

Note the parameter passed to bernoulli_().
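A sketch of why passing the probabilities explicitly lines the two paths up (assuming both paths consume the generator identically on CPU, which appears to hold here; the probability values are arbitrary):

```python
import torch

p = torch.full((10,), 0.3)     # explicit per-element probabilities

torch.manual_seed(1)
out_of_place = torch.bernoulli(p)

torch.manual_seed(1)
in_place = torch.zeros(10).bernoulli_(p)

# With the probabilities given explicitly and the same seed, both
# calls draw the same samples.
print(torch.equal(out_of_place, in_place))
```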


Yes, I agree with you. I think the documentation should be clearer: following the torch.bernoulli() docs, it seems we fill vector "a" with probabilities taken from that vector, or at least that is how I understood it, and that is how torch.normal() works.
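For comparison, a small sketch of the analogous behavior with normal sampling: torch.normal(means) reads the tensor as per-element means (with std defaulting to 1.0), while the in-place normal_() ignores the current contents and defaults to mean 0, std 1.

```python
import torch

means = torch.full((5,), 100.0)

# Out-of-place: each element of `means` is the mean of that sample,
# std defaults to 1.0, so values land near 100.
sample = torch.normal(means)
print(sample)

# In-place: the existing values (100.0) are overwritten; defaults are
# mean=0, std=1 unless passed explicitly, so values land near 0.
inplace = torch.full((5,), 100.0).normal_()
print(inplace)
```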

Agreed. Both the in-place and non-in-place versions need arguments for the probabilities, which is not clear in the docs.