How to know if gradients backpropagate through a pytorch function?

For the PyTorch functions:

  • torch.special.erf() which is used in torch.distributions.normal.Normal().cdf()
  • torch.special.erfinv() which is used in torch.distributions.normal.Normal().icdf()
  • torch._standard_gamma() which is used in torch.distributions.gamma.Gamma().rsample()

I’m wondering whether gradients can actually backpropagate through these. It appears that each of them has its own grad_fn; however, for something like torch._standard_gamma(parameter), I thought this was a completely stochastic node (statistically, I’m not sure how you could reparametrize it), and so I assumed gradients can’t backpropagate to parameter.
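For example, when I say they each have a grad_fn, this is the kind of check I mean for the first two (arbitrary values; the exact grad_fn names may differ between PyTorch versions):

import torch

x = torch.tensor([0.1, 0.5], requires_grad=True)
y = torch.special.erf(x)
print(y.grad_fn)     # e.g. <ErfBackward0 ...>, so a backward is registered
y.sum().backward()
print(x.grad)        # matches the analytic derivative 2/sqrt(pi) * exp(-x**2)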

As a more general question, I’m curious how to tell whether gradients can backpropagate through a PyTorch function.
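For instance, would checking the output’s grad_fn and running torch.autograd.gradcheck (which compares the analytic backward against finite differences and wants double-precision inputs) be the right way to verify this? A sketch of what I mean, with arbitrary values:

import torch

x = torch.randn(4, dtype=torch.double, requires_grad=True)
print(torch.special.erf(x).grad_fn is not None)           # True -> autograd recorded a backward node
print(torch.autograd.gradcheck(torch.special.erf, (x,)))  # True if analytic and numeric grads agree (raises otherwise)

y = torch.empty(4, dtype=torch.double).uniform_(-0.9, 0.9).requires_grad_()
print(torch.autograd.gradcheck(torch.special.erfinv, (y,)))  # erfinv needs inputs strictly inside (-1, 1)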

I appreciate any help! I’m doing some augmentations with VAEs for my thesis, so I want to make sure this works.

This post is very old, but for anyone working with gradients of the gamma distribution in the future, here is code that verifies the function indeed has a valid backward pass:

import torch
alpha = torch.tensor([2.0, 3.5], requires_grad=True)
x = torch._standard_gamma(alpha)    # draws Gamma(alpha, 1) samples
print(x.grad_fn)                    # e.g. <StandardGammaBackward0 ...>, so a backward is registered
x.sum().backward()
print(alpha.grad)                   # non-None: gradients reach alpha
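More generally, torch.distributions.gamma.Gamma sets has_rsample = True, which is the distributions API’s way of saying that rsample() is reparameterized, so gradients from a sample flow back to the concentration (and rate) parameters; internally rsample() goes through the same standard-gamma backward shown above. A one-line check:

print(torch.distributions.gamma.Gamma.has_rsample)   # True -> rsample() is differentiable w.r.t. its parameters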