When I run a model and then sample from the output distribution in multiple processes with the multiprocessing module, every sample is the same even though they should be random. Why are the samples identical? A minimal working example:

```
import multiprocessing

import torch
import torch.nn as nn
from torch.distributions import Categorical


class test(multiprocessing.Process):
    def __init__(self, weights):
        multiprocessing.Process.__init__(self)
        self.m = nn.Linear(10, 10)
        self.m.load_state_dict(weights)

    def run(self):
        out = self.m(torch.ones(10))
        dist = Categorical(logits=out)
        print(dist.sample())


def main():
    orig_model = nn.Linear(10, 10)
    weights = orig_model.state_dict()
    workers = [test(weights) for i in range(100)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()


if __name__ == "__main__":
    main()
```

with output

```
tensor(2)
tensor(2)
...
tensor(2)
tensor(2)
```