Hi, I want to do something similar to this:
mu = torch.zeros(5, 2)
sd = torch.ones(5)
torch.normal(mu, sd)
But I get RuntimeError: inconsistent tensor size.
I noticed in the 1d case it works:
mu = torch.zeros(5, 1)
sd = torch.ones(5)
torch.normal(mu, sd)
Sorry for the spam, sd = torch.ones(5, 2) works, so we could do sd.repeat(1, 2) if sd is one-dimensional. Or torch.cat([sd, sd], 1) if sd is a Variable, since repeat is not supported by Variable.
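A minimal sketch of both workarounds (assuming sd is kept as an N x 1 column, so that repeat / cat along dim 1 produce an N x 2 std matching mu):

import torch
from torch.autograd import Variable  # old-style autograd wrapper

mu = torch.zeros(5, 2)
sd = torch.ones(5, 1)  # keep the std as an N x 1 column

# plain tensor: tile the column along dim 1 so it matches mu's shape
out = torch.normal(mu, sd.repeat(1, 2))

# Variable: build the N x 2 std by concatenation instead of repeat
mu_v = Variable(torch.zeros(5, 2))
sd_v = Variable(torch.ones(5, 1))
out_v = torch.normal(mu_v, torch.cat([sd_v, sd_v], 1))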
apaszke (Adam Paszke)
Actually, if you want the mean/std to be the same for all samples, you can just pass a number to torch.normal.
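For instance (a minimal sketch; the per-sample means still come from a tensor while the std is a plain Python number):

import torch

mu = torch.zeros(5, 2)
out = torch.normal(mu, 1.0)  # scalar std used for every sample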
Good call. The use case I had was a network outputting Gaussian parameters mu in N x D and log_sd in N x 1, so the example above was a bit off.
After doing some more searching, I think that using expand_as might be the most efficient? To summarize:
mu = Variable(torch.zeros(5, 2))
sd = Variable(torch.rand(5, 1))
torch.normal(mu, sd.expand_as(mu))
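Applied to the use case above (a sketch; N, D, and the zero-filled tensors are placeholders for the network's actual outputs, and the std is recovered from log_sd with exp before expanding):

import torch
from torch.autograd import Variable

N, D = 5, 2
mu = Variable(torch.zeros(N, D))      # stand-in for the network's mean output
log_sd = Variable(torch.zeros(N, 1))  # stand-in for the network's log-std output

sd = torch.exp(log_sd)                # back to a positive std
sample = torch.normal(mu, sd.expand_as(mu))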
apaszke (Adam Paszke)
Yes, expand will be much better! There will be no memory copy in this case.
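A quick way to see that no copy happens (a sketch; comparing data_ptr is one way to check that the expanded tensor is just a view over the same storage, with stride 0 along the expanded dimension):

import torch

sd = torch.rand(5, 1)
expanded = sd.expand(5, 2)

# the expanded tensor reuses the original storage; dim 1 has stride 0
print(sd.data_ptr() == expanded.data_ptr())  # True
print(expanded.stride())                     # (1, 0)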