Gradients are zero when calling autograd on torch.bernoulli()

Hi,

Yes, this is expected. Sampling from a Bernoulli distribution is a discrete operation, so the sample is not differentiable with respect to the input probabilities, and PyTorch explicitly defines the gradient of torch.bernoulli() as zeros.
It is specified here: https://github.com/pytorch/pytorch/blob/727463a727e75858809a325477ac2b62ccd08e7e/tools/autograd/derivatives.yaml#L270-L271
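
As a minimal sketch of the behavior (variable names are my own):

```python
import torch

# Probabilities we want gradients with respect to.
p = torch.tensor([0.2, 0.5, 0.8], requires_grad=True)

# Draw discrete 0/1 samples; this op is not differentiable in p.
sample = torch.bernoulli(p)

# backward() runs without error, but the gradient is defined as zeros.
sample.sum().backward()
print(p.grad)  # tensor([0., 0., 0.])
```

Defining the gradient as zeros (rather than raising an error) lets the sample sit inside a larger graph without breaking backpropagation through other paths; it just contributes nothing to the gradient itself.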
