Truncated normal

How do I sample from a truncated normal distribution in PyTorch? I could not find one among the built-in distributions. Is it not implemented? Are there any workarounds?

This post might be the easiest approach.
Several methods are discussed in that thread, so another one might fit your use case better.
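One of the simplest workarounds discussed in threads like that is rejection sampling: draw from the untruncated normal and keep only the draws inside the bounds. A minimal sketch (the function name and arguments are my own, not from the linked post; recent PyTorch versions also ship `torch.nn.init.trunc_normal_` for in-place weight initialization):

```python
import torch

def trunc_normal_rejection(mu, sigma, a, b, n):
    # Rejection sampling: repeatedly draw from N(mu, sigma) and keep
    # only the draws that fall inside [a, b]. Simple, but slow when
    # [a, b] carries little probability mass.
    out = torch.empty(n)
    filled = 0
    while filled < n:
        cand = torch.normal(mu, sigma, size=(n,))
        keep = cand[(cand >= a) & (cand <= b)]
        m = min(keep.numel(), n - filled)
        out[filled:filled + m] = keep[:m]
        filled += m
    return out

samples = trunc_normal_rejection(0.0, 1.0, -1.0, 1.0, 10000)
```

Note this gives correct samples but no `log_prob`, and it is not differentiable in `mu` and `sigma`.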

I put together a truncated normal distribution class that implements the torch.distributions.Distribution interface:
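For reference, here is a minimal sketch of what such a class can look like — this is not the linked module (which is more complete), just an illustration of the `Distribution` interface, assuming tensor parameters and using inverse-CDF sampling so that `rsample` is reparameterized:

```python
import torch
from torch.distributions import Distribution, Normal, constraints

class TruncatedNormal(Distribution):
    # Illustrative one-dimensional normal truncated to [a, b].
    arg_constraints = {"loc": constraints.real, "scale": constraints.positive}
    has_rsample = True

    def __init__(self, loc, scale, a, b, validate_args=None):
        self.loc, self.scale, self.a, self.b = loc, scale, a, b
        self._standard = Normal(torch.zeros_like(loc), torch.ones_like(scale))
        self._alpha = (a - loc) / scale
        self._beta = (b - loc) / scale
        # Probability mass of the untruncated normal inside [a, b].
        self._Z = self._standard.cdf(self._beta) - self._standard.cdf(self._alpha)
        super().__init__(validate_args=validate_args)

    @constraints.dependent_property
    def support(self):
        return constraints.interval(self.a, self.b)

    def log_prob(self, value):
        # Normal log-density, renormalized by the mass inside [a, b].
        return (self._standard.log_prob((value - self.loc) / self.scale)
                - self.scale.log() - self._Z.log())

    def rsample(self, sample_shape=torch.Size()):
        # Inverse-CDF transform of a uniform: differentiable in loc and scale.
        shape = self._extended_shape(sample_shape)
        u = torch.rand(shape, device=self.loc.device)
        p = self._standard.cdf(self._alpha) + u * self._Z
        return self.loc + self.scale * self._standard.icdf(p)
```

The inverse-CDF approach is exact for the one-dimensional case, but `Normal.icdf` can lose precision when the bounds sit far out in the tails.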

I wanted to follow up on this, as I am looking to rsample from a truncated Gaussian in PyTorch and compute log_prob, and wanted to see if there have been any new developments. @anton’s module that he shared is very helpful, but unfortunately I am looking for a solution that is CUDA-capable. Thank you for your help.

It is CUDA-capable now

Yes, I made the post before our email exchange. Thanks again for your help!

Is it also differentiable in the parameters?
Why not add this to PyTorch?

I am looking for a way to sample from the truncated normal that is differentiable in the parameters of the distribution. That post doesn’t cover automatic differentiation, as far as I can see.
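If it helps, here is a standalone sketch of the inverse-CDF reparameterization trick for the one-dimensional case: the sample is an explicit function of mu and sigma built from `erf`/`erfinv`, both differentiable in PyTorch, so gradients flow through it (the function name and arguments are my own):

```python
import torch

def rsample_truncnorm(mu, sigma, a, b, n):
    # Standard normal CDF and inverse CDF written via erf / erfinv,
    # both of which are differentiable in PyTorch.
    phi = lambda x: 0.5 * (1.0 + torch.erf(x / 2 ** 0.5))
    phi_inv = lambda p: 2 ** 0.5 * torch.erfinv(2.0 * p - 1.0)
    lo, hi = phi((a - mu) / sigma), phi((b - mu) / sigma)
    u = torch.rand(n)  # noise is independent of the parameters
    # Map the uniforms into [lo, hi] and push them through the inverse
    # CDF: the result is a differentiable function of mu and sigma.
    return mu + sigma * phi_inv(lo + u * (hi - lo))

mu = torch.tensor(0.0, requires_grad=True)
sigma = torch.tensor(1.0, requires_grad=True)
x = rsample_truncnorm(mu, sigma, -1.0, 1.0, 2000)
x.mean().backward()  # populates mu.grad and sigma.grad
```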

Has anyone got this working in a fashion similar to TensorFlow’s tensorflow.distributions.TruncatedMultivariateNormal?

See tfp.distributions.MultivariateNormalTriL in TensorFlow Probability.

For my application I specifically need to specify the covariance and the location of the mean, so I need something more general than a standard unit normal.

I understand it may be possible to sample from an un-truncated multivariate normal, transform it with the Cholesky factor of the covariance, and then truncate, but I would prefer a simple, proper way of doing it. Thanks all!
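For what it’s worth, that workaround can be sketched as follows; note that the transform uses the Cholesky factor of the covariance rather than the covariance itself, and the function name and the box bounds `lo`/`hi` are my own:

```python
import torch

def trunc_mvn_rejection(mu, cov, lo, hi, n):
    # Sample from N(mu, cov) via the Cholesky factor, then reject
    # samples falling outside the per-dimension box [lo, hi].
    L = torch.linalg.cholesky(cov)
    d = mu.shape[0]
    chunks, kept = [], 0
    while kept < n:
        z = torch.randn(4 * n, d)     # standard normal draws
        x = mu + z @ L.T              # correlate and shift
        x = x[((x >= lo) & (x <= hi)).all(dim=1)]
        chunks.append(x)
        kept += x.shape[0]
    return torch.cat(chunks)[:n]

mu = torch.tensor([0.0, 0.0])
cov = torch.tensor([[1.0, 0.5], [0.5, 1.0]])
lo, hi = torch.tensor([-1.0, -1.0]), torch.tensor([1.0, 1.0])
samples = trunc_mvn_rejection(mu, cov, lo, hi, 5000)
```

This is exact but can become very slow when the box captures little of the distribution’s mass, which is presumably why a proper built-in would be preferable.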