Does anyone have an example of how to set up a model of *n* Bernoulli random variables? I want to sample billions of Bernoulli variables and then update their parameters with something like an NLL loss against an array of billions of 1's and 0's.

Something like this, but that scales better. If *n* is very large (e.g. a 1e6 × 1e6 matrix of parameters), how might I structure this problem?

```
import numpy as np
import torch

sample = np.array([[1., 1., 0., 1.],
                   [0., 1., 1., 1.],
                   [0., 0., 1., 1.],
                   [1., 0., 0., 1.]])
x = torch.from_numpy(sample).float()  # Variable is deprecated; plain tensors work
n = 4
p = torch.rand((n, n), requires_grad=True)
learning_rate = 2e-4
num_epochs = 100
for _ in range(num_epochs):
    # elementwise (not matrix) products: x*p + (1-x)*(1-p) is p where x==1, 1-p where x==0
    NLL = -torch.sum(torch.log(x * p + (1 - x) * (1 - p)))
    NLL.backward()
    with torch.no_grad():
        p -= learning_rate * p.grad
        p.grad.zero_()
```
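One way to structure it at scale might be to parameterize unconstrained logits instead of raw probabilities (so nothing has to be clamped into (0, 1)) and stream the target array in row chunks, so the full 1e6 × 1e6 matrix of observations never has to sit in memory at once. A minimal sketch of that idea, where `n`, `chunk`, and `make_chunk` are illustrative placeholders rather than anything from a library:

```python
import torch

n = 1_000          # stand-in for 1e6
chunk = 100        # rows of targets processed per step
# unconstrained logits; sigmoid(logits) are the Bernoulli probabilities
logits = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.SGD([logits], lr=2e-4)

def make_chunk(start, end):
    # placeholder for loading rows [start, end) of the 0/1 target array
    return torch.randint(0, 2, (end - start, n)).float()

for _ in range(2):  # epochs
    for start in range(0, n, chunk):
        end = min(start + chunk, n)
        x = make_chunk(start, end)
        # BCE-with-logits is exactly the Bernoulli NLL, computed stably
        nll = torch.nn.functional.binary_cross_entropy_with_logits(
            logits[start:end], x, reduction='sum')
        opt.zero_grad()
        nll.backward()
        opt.step()

probs = torch.sigmoid(logits.detach())  # recovered Bernoulli parameters
```

Since each step only touches a `chunk × n` slice, memory per step stays bounded by the chunk size; the parameter tensor itself could further be sharded or made sparse if even that is too large.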