I am exploring PyTorch's autograd functionality in a setting not related to neural networks, so this may be a rather unusual question.

I want to apply PyTorch's autograd functionality in a Monte Carlo estimation. For that I need to simulate a large sample of Wiener processes and, in my case, store all the intermediate values in a 2-dimensional tensor.

So I want to do something like the following, but without the in-place operations, as they break backpropagation.

```
import torch
spot = torch.tensor(30.)
vol = torch.tensor(.2)
expiry = torch.tensor(1.)
r = torch.tensor(.06)
n_steps = 50
n_paths = 1000
dt = (expiry / n_steps)
paths = torch.empty([n_steps,n_paths])
paths[0] = spot
for i in range(1, n_steps):
    rand_norm = torch.randn(n_paths)
    paths[i] = paths[i-1] * torch.exp((r - 0.5 * vol * vol) * dt + torch.sqrt(dt) * vol * rand_norm)
print(paths)
```
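One idea I had was to collect each time step in a Python list and `torch.stack` the list at the end, so no tensor is ever written in place. A minimal sketch of that variant (with `requires_grad=True` added on the inputs I want gradients for):

```python
import torch

spot = torch.tensor(30., requires_grad=True)
vol = torch.tensor(.2, requires_grad=True)
expiry = torch.tensor(1.)
r = torch.tensor(.06)
n_steps = 50
n_paths = 1000
dt = expiry / n_steps

# Collect each time step in a list instead of assigning into a tensor.
steps = [spot.expand(n_paths)]
for i in range(1, n_steps):
    rand_norm = torch.randn(n_paths)
    steps.append(steps[-1] * torch.exp((r - 0.5 * vol * vol) * dt
                                       + torch.sqrt(dt) * vol * rand_norm))

# Stack into the same [n_steps, n_paths] shape as before.
paths = torch.stack(steps)
```

This keeps the graph intact (e.g. `paths[-1].mean().backward()` populates `spot.grad`), but I am unsure whether the Python-level loop and the final stack are efficient for large `n_steps`.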

Does anyone have a suggestion for doing this efficiently without in-place operations?
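I also wondered whether the loop can be vectorised away entirely: since the log-increments are independent, the whole path can be built with a single `torch.cumsum` over the time dimension. A sketch of that idea (same hypothetical parameters as above):

```python
import torch

spot = torch.tensor(30., requires_grad=True)
vol = torch.tensor(.2, requires_grad=True)
expiry = torch.tensor(1.)
r = torch.tensor(.06)
n_steps = 50
n_paths = 1000
dt = expiry / n_steps

# Log-increments for steps 1..n_steps-1, drawn all at once.
z = torch.randn(n_steps - 1, n_paths)
log_increments = (r - 0.5 * vol * vol) * dt + torch.sqrt(dt) * vol * z

# Prepend a row of zeros so the cumulative sum starts at log(spot).
log_increments = torch.cat([torch.zeros(1, n_paths), log_increments], dim=0)
paths = spot * torch.exp(torch.cumsum(log_increments, dim=0))
```

No in-place writes occur here, and gradients flow to `spot` and `vol`, but I am not certain this is how one is supposed to use autograd outside neural networks.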

Any help is much appreciated. Thanks in advance!