Hello,

I have two linear layers (`wl1` and `wl2`) with a ReLU layer between them. I want to reorder the neurons of `wl1` without changing the result of a forward pass. My function is:

```
def WSort(wl1, wl2, sorted_indices):
    # Permute the output neurons of wl1 (rows of its weight, entries of its
    # bias) and the matching input columns of wl2.
    new_fc1 = wl1.weight.data.clone()
    new_b1 = wl1.bias.data.clone()
    new_fc2 = wl2.weight.data.clone()
    n = wl1.out_features  # number of neurons to reorder
    for i in range(n):
        j = sorted_indices[i]
        new_fc1[i] = wl1.weight.data[j]        # row j of wl1 becomes row i
        new_b1[i] = wl1.bias.data[j]
        new_fc2[:, i] = wl2.weight.data[:, j]  # column j of wl2 becomes column i
    wl1.weight.data.copy_(new_fc1)
    wl1.bias.data.copy_(new_b1)
    wl2.weight.data.copy_(new_fc2)
```
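
For reference, this is roughly how I check that the forward pass is unchanged (a minimal sketch; the layer sizes, `net`, and `x` are placeholders for my actual setup):

```
import torch
import torch.nn as nn

# Toy stand-in for my network: linear -> ReLU -> linear (placeholder sizes)
wl1 = nn.Linear(8, 16)
wl2 = nn.Linear(16, 4)
net = nn.Sequential(wl1, nn.ReLU(), wl2)

x = torch.randn(5, 8)
out_before = net(x)

# Apply an arbitrary permutation of wl1's 16 neurons
perm = torch.randperm(16)
WSort(wl1, wl2, perm)

print(torch.allclose(out_before, net(x)))  # expect True
```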

My neural network outputs the same result before and after applying this function, so it seems to work, but learning then becomes erratic. I think I missed something with autograd, but I don't know what. I tried manipulating the `Parameter` objects instead of the `Tensor` objects, like this:

```
from torch.nn import Parameter

def WSort(wl1, wl2, sorted_indices):
    new_fc1 = wl1.weight.data.clone()
    new_b1 = wl1.bias.data.clone()
    new_fc2 = wl2.weight.data.clone()
    n = wl1.out_features  # number of neurons to reorder
    for i in range(n):
        j = sorted_indices[i]
        new_fc1[i] = wl1.weight.data[j]
        new_b1[i] = wl1.bias.data[j]
        new_fc2[:, i] = wl2.weight.data[:, j]
    # Rebind fresh Parameter objects instead of copying in place
    wl1.weight = Parameter(new_fc1)
    wl1.bias = Parameter(new_b1)
    wl2.weight = Parameter(new_fc2)
```

But the learning phase (with SGD) gives even worse results…
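
In case the Python loop hides anything, here is a loop-free sketch of the same in-place permutation as the first version (assuming `sorted_indices` is a LongTensor permutation of `range(wl1.out_features)`):

```
import torch

def WSort_vectorized(wl1, wl2, sorted_indices):
    # sorted_indices: LongTensor permutation of range(wl1.out_features)
    with torch.no_grad():
        wl1.weight.copy_(wl1.weight[sorted_indices])     # permute rows of wl1
        wl1.bias.copy_(wl1.bias[sorted_indices])         # permute bias entries
        wl2.weight.copy_(wl2.weight[:, sorted_indices])  # permute matching columns of wl2
```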

Note that there are other linear layers before `wl1` and after `wl2`.

How should I manage my parameters/tensors in my NN to reorder the neurons in a layer?

Thank you