Hi, I have a trainable parameter that parameterizes an exponential distribution. However, when I generate samples from that distribution, the samples have no grad_fn, so backward() can't backpropagate through them.
>>> x
tensor([1.], grad_fn=<ExpBackward>) # a trainable parameter that can be back-propagated through
>>> torch.distributions.exponential.Exponential(x).sample()
tensor([0.3613]) # the sample has lost its grad_fn
Is there any way to fix this?
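For reference, here is a minimal, self-contained script that reproduces the behaviour above (the parameter name `log_rate` is just illustrative; any tensor with a grad_fn shows the same thing):

```python
import torch

# Trainable parameter; exponentiating keeps the rate positive and
# gives it a grad_fn, like the x in the snippet above.
log_rate = torch.nn.Parameter(torch.zeros(1))
rate = log_rate.exp()
print(rate.grad_fn)  # an ExpBackward node -- gradients can flow to log_rate

# Drawing a sample detaches the result from the autograd graph:
sample = torch.distributions.exponential.Exponential(rate).sample()
print(sample.grad_fn)  # None -- backward() cannot reach log_rate from here
```

So the distribution's rate is connected to the graph, but the sampled tensor is not.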