How to sample from normal distribution with torch::Tensor parameters in C++?

Hi everyone,

I am wondering how to sample from a Gaussian distribution with mean mu and standard deviation std in C++. Ideally, I would like to do this when mu and std are the outputs of a NN, hence torch::Tensor objects, so that autograd can keep track of their operations.

Is there a method that satisfies these requirements?

In Python, I believe this is solved using torch.distributions.Normal.

Thank you.


I am facing the same problem: is there an equivalent of torch.distributions.Normal in C++? Any help would be appreciated!


Hi @cj0741!

Hopefully this will help. Based on this link, we can sample entries from a Normal distribution in PyTorch for C++ as follows:

auto sample = torch::randn({1}) * sigma + mu;

which is similar to the implementation found here (I thank @glaringlee for having referred me to it a while ago).

Similarly, if you need to calculate the p.d.f., you could perhaps do it as follows, and then compute the log-probability from it (see the implementation here):

auto pdf = (1.0 / (sigma * std::sqrt(2*M_PI))) * torch::exp(-0.5 * torch::pow((sample - mu) / sigma, 2));

Then, the logprob is:

auto log_prob = torch::log(pdf);

Feel free to ask any more questions!


Super! That is exactly what I need.
I am new to PyTorch C++; if I have more questions, I will come back to you.
Thanks~


Hi @cruzas,
I also have a problem converting between Tensor and C++ values such as std::vector. Is there a method in the C++ API that does the same thing as “x_inp.data.numpy()” and “torch.from_numpy” in Python?
Thanks for your help.
Jin

Hi @cj0741,

I must admit I am no expert in PyTorch for C++, and I do not know of a function in the API that does this conversion automatically.

The only way I am aware of to access the data from a Tensor object, which could be used to build a C++ vector manually, is as follows:

auto sample_data = *(sample.data_ptr<double>());

where sample is a Tensor object, assuming it is a tensor of type double. Note that this dereference only reads the first element; to copy out the whole tensor you would iterate over the pointer.

Be careful, though. If you want to, for example, use backpropagation on a neural network, the values you work with must remain linked to Tensor objects. If you extract the raw data from a Tensor and manipulate it outside of torch operations, the autograd graph will not keep track of those operations. Again, I admit I am no expert, but that is how I have understood it so far. The documentation here is probably clearer in explaining this: https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html.

Feel free to post this as its own question on the forum, perhaps someone else knows. I would also be interested in knowing if such a function exists. :slight_smile:

Hi @cruzas ,

Thank you for your help!