How to convert a Normal variable into a regular Variable that can be passed to a loss function

How do I change a Normal variable, which has been created via torch.distributions.Normal, into a regular Variable that can be fed into a loss function?

Try something like this:

import torch
from torch.distributions import Normal
from torch.autograd import Variable

m = Normal(torch.Tensor([0.0]), torch.Tensor([1.0]))
var = Variable(m.sample())
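
If it helps, the wrapped sample can then be passed to an ordinary loss. A minimal sketch, assuming an MSE loss against a hypothetical target (names here are placeholders, not from the thread):

loss_fn = torch.nn.MSELoss()
target = Variable(torch.zeros(1))  # hypothetical target
loss = loss_fn(var, target)        # the loss evaluates fine, but no gradient
                                   # flows back through the sampling step itself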

Thank you for that, Richard. However, I am still having a problem. What I am trying to do is take a Gaussian distribution over the output of my network, draw a random sample from this distribution, and use that sample in the loss function. However, when I try to do this I get this error:

variables, grad_variables, retain_graph)
RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

I tried to turn the Normal variable into a Variable, but I can't seem to do this. Any help is greatly appreciated.

Even if the distribution parameters are outputs of a network, sampling is not a mathematically differentiable operation (it is not even a function). Depending on your use case, you might want to look into divergence metrics like the KL divergence, or use the reparametrization trick from VAEs by inputting random noise to the network, as in the sketch below.
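
For illustration, here is a minimal sketch of the reparametrization trick, assuming the network outputs a mean mu and standard deviation sigma (these names are placeholders, not from the original thread). The noise is drawn separately and combined with the network outputs, so the sample stays differentiable with respect to them:

import torch
from torch.autograd import Variable

# Hypothetical network outputs: mean and standard deviation of the Gaussian.
mu = Variable(torch.zeros(1), requires_grad=True)
sigma = Variable(torch.ones(1), requires_grad=True)

# Reparametrization: sample = mu + sigma * eps, with eps ~ N(0, 1).
# The randomness lives in eps, so the sample has a grad_fn.
eps = Variable(torch.randn(1))
sample = mu + sigma * eps

loss = ((sample - 1.0) ** 2).mean()  # any differentiable loss on the sample
loss.backward()                      # gradients now flow back to mu and sigma

Compare this with calling m.sample() directly: there the random draw has no grad_fn, which is exactly why backward() raises the error above. Newer versions of torch.distributions also provide rsample(), which applies the same trick for you.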