How do I make a residual net with noise without in-place operations?

Hi, each layer should be calculated from the last layer as
next_layer = last_layer + Relu( Linear( last_layer ) ) + noise
Each layer has the same input dim as output dim.
This is what I did

import torch
import torch.nn as nn

class MyNN(nn.Module):
    def __init__(self, space_dim, path_length, noisefactor):
        super(MyNN, self).__init__()
        # path_length and noisefactor are used in forward(), so they are
        # taken as constructor arguments here
        self.path_length = path_length
        self.noisefactor = noisefactor
        self.layers = nn.ModuleList()
        for i in range(self.path_length - 1):
            linear_layer = nn.Linear(space_dim, space_dim)
            self.layers.append(nn.Sequential(linear_layer, nn.LeakyReLU(0.2, inplace=True)))

    def forward(self, z):
        for i in range(self.path_length-1):
            z += self.layers[i](z)
            noise = (self.noisefactor * torch.normal(0,1,z.size())).detach()
            z += noise
        return z

I am getting "one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1000, 1]], which is output 0 of AddBackward0". In the stack trace this happens when executing a linear layer. I guess that is because I am not properly saving the in-between values of z. Should I just throw them into some list, or preallocate a tensor to save them in?

Could you remove the in-place operations and check if this solves the error? z += self.layers[i](z) modifies z in place, but the nn.Linear inside self.layers[i] has already stored exactly this tensor as its input for the backward pass (it needs it to compute the weight gradient), so mutating it invalidates the recorded graph. The same applies to z += noise.
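The failure can be reduced to a minimal sketch (the shapes below are made up for illustration): nn.Linear saves its input for backward, and += overwrites that saved tensor, which should reproduce the same AddBackward0 error as in your post:

import torch
import torch.nn as nn

lin = nn.Linear(4, 4)
z = torch.randn(2, 4)

z = z + lin(z)      # out of place: lin saved the original z, which stays intact
z += lin(z)         # in place: lin saved the current z (the output of the add
                    # above), and += now mutates that saved tensor
z.sum().backward()  # RuntimeError: one of the variables needed for gradient
                    # computation has been modified by an inplace operation

Writing the adds out of place creates new tensors instead: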

    def forward(self, z):
        for i in range(self.path_length - 1):
            # out-of-place adds create new tensors and leave the tensors
            # saved for backward untouched
            z = z + self.layers[i](z)
            noise = (self.noisefactor * torch.normal(0, 1, z.size())).detach()
            z = z + noise
        return z
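
With that change, a quick smoke test runs through backward cleanly (the constructor values are hypothetical, matching the arguments sketched into __init__ above):

model = MyNN(space_dim=1, path_length=5, noisefactor=0.1)
out = model(torch.randn(1000, 1))
out.mean().backward()  # no in-place error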