This is the way I found that works:
# generating uniform variables
import numpy as np
num_samples = 3
Din = 1
lb, ub = -1, 1
xn = np.random.uniform(low=lb, high=ub, size=(num_samples,Din))
print(xn)
# PyTorch version: build a sampler (distribution) object, then draw from it
import torch
sampler = torch.distributions.Uniform(low=lb, high=ub)
r = sampler.sample((num_samples, Din))
print(r)
# equivalent one-liner
r2 = torch.distributions.Uniform(low=lb, high=ub).sample((num_samples, Din))
print(r2)
# process the input with a small network
import torch.nn as nn
from collections import OrderedDict

Dout = 1  # output dimension of the linear layer
f = nn.Sequential(OrderedDict([
    ('f1', nn.Linear(Din, Dout)),
    ('out', nn.SELU())
]))
Y = f(r2)
print(Y)
But I have to admit I don't know what the point of creating a sampler object is, and why not just call it directly as I do in the one-liner (last line of the code).
Comments:
- sampler objects are useful because they let you transform/compose/cache/etc. distributions (see the first sketch after this list); see https://arxiv.org/abs/1711.10604, the top of the docs at https://pytorch.org/docs/stable/distributions.html, and https://arxiv.org/abs/1506.05254
- you can feed tensors to Uniform to tell it the high-dimensional interval (hypercube) over which to generate the uniform samples; that is why it accepts tensors as input rather than just numbers (see the second sketch after this list)
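Here is a minimal sketch of the transform/compose point, using torch.distributions.TransformedDistribution; the particular base distribution and affine map are just illustrative choices, not the only way to do this:
# sketch: because a sampler is a first-class object, it can be transformed
# rather than only sampled; rescale U(0, 1) into U(-1, 1) with an affine map
import torch
from torch.distributions import Uniform, TransformedDistribution
from torch.distributions.transforms import AffineTransform

base = Uniform(low=0.0, high=1.0)
shifted = TransformedDistribution(base, [AffineTransform(loc=-1.0, scale=2.0)])
x = shifted.sample((3, 1))
print(x)                    # samples now lie in [-1, 1)
print(shifted.log_prob(x))  # the composed density is still available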
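And a minimal sketch of the tensor-bounds point; the specific per-dimension bounds below are made up for illustration:
# sketch: tensor-valued low/high define a per-dimension box to sample from
import torch

lb = torch.tensor([-1.0, 0.0, 2.0])  # lower bound of each dimension
ub = torch.tensor([1.0, 5.0, 3.0])   # upper bound of each dimension
box = torch.distributions.Uniform(low=lb, high=ub)
samples = box.sample((4,))           # shape (4, 3): 4 draws from the 3-D box
print(samples)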