I want to change the size of the `weight` and `bias` tensors in a `torch.nn.Linear` layer. Even if I set `weight.grad = None` and `bias.grad = None`, the backward pass raises an error. A possible solution would be to re-initialize the entire computational graph, but I don’t know how to do that. Here is a minimal reproducing example:

```python
import torch
l = torch.nn.Linear(5, 4)
x = torch.randn(1, 5)
y = l(x).pow(2).mean()
y.backward()
l.weight.data = torch.randn(3, 5)
l.weight.grad = None
l.bias.data = torch.randn(3)
l.bias.grad = None
z = l(x).pow(2).mean()
print(z)
z.backward()
```

But I get the error:

```
RuntimeError: Function ExpandBackward returned an invalid gradient at index 0 - expected shape [4] but got [3]
```

It seems that autograd keeps some intermediate metadata about the parameters and uses it when recomputing the gradients, which is why I think the whole computational graph would need to be re-initialized.

Since many objects in my code hold references to the layers I use, I don’t want to re-create a layer each time I delete a row or a column from it.
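For completeness, one variant I have been experimenting with: replacing the `Parameter` objects wholesale instead of assigning to `.data` (the slicing down to 3 output features here is just an illustration). This seems to avoid the error while keeping the same layer object alive, though I am not sure it is the intended approach:

```python
import torch

l = torch.nn.Linear(5, 4)
x = torch.randn(1, 5)
y = l(x).pow(2).mean()
y.backward()

# Instead of mutating weight.data / bias.data in place, assign fresh
# nn.Parameter objects with the new shape (here: keep only the first
# 3 output rows, purely as an example). The layer object `l` itself
# is unchanged, so external references to it remain valid.
with torch.no_grad():
    l.weight = torch.nn.Parameter(l.weight[:3, :].clone())
    l.bias = torch.nn.Parameter(l.bias[:3].clone())
l.out_features = 3  # keep the module's bookkeeping consistent

z = l(x).pow(2).mean()
z.backward()  # no shape error with the fresh parameters
print(l.weight.grad.shape, l.bias.grad.shape)
```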