Hi,

The problem is that instead of `_a` and `_b` I have the model parameters, which are already defined, and those are the ones that should be optimized (in this case `_a` and `_b` are not; see my small modification to your code). In summary, I want the model weights to be optimized using autograd, and my matrix to contain the new weight values automatically.

This code with the small modification works, but with a convolution layer it becomes complicated or even impossible, because my matrix contains the 2D representation of the weight tensor plus some modifications (padding between the rows and columns, etc.). For example, let w0, w1, w2, w3 be the model weights (convolutional layers with many channels); then my matrix is defined as [[W0, W1], [W2, W3]], where W0 is a modification of w0, including reshaping to a 2D matrix plus padding between the rows and columns, and so on. My goal is that W0, W1, W2, W3 get updated whenever w0, w1, w2, w3 do. I don't want to rebuild my matrix every time, to save execution time.
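For concreteness, the construction of one block row like [W0, W1] from the raw weights could be sketched roughly like this (the shapes, the `to_block` helper, and the simple interleaved zero padding are just illustrative assumptions, not my real modification):

```
import torch

# dummy conv weights of shape [out_ch, in_ch, kH, kW]; the shapes are made up
w0 = torch.randn(2, 3, 3, 3)
w1 = torch.randn(2, 3, 3, 3)

def to_block(w, pad=1):
    # reshape the 4D conv weight to 2D: [out_ch, in_ch * kH * kW]
    m = w.reshape(w.shape[0], -1)
    rows, cols = m.shape
    # allocate a larger zero matrix and scatter `m` into it,
    # leaving `pad` zero rows/columns between the entries
    padded = torch.zeros(rows * (1 + pad) - pad, cols * (1 + pad) - pad)
    padded[::1 + pad, ::1 + pad] = m
    return padded

W0, W1 = to_block(w0), to_block(w1)
M = torch.cat((W0, W1), dim=1)  # one block row of [[W0, W1], [W2, W3]]
print(M.shape)
```

The point is only that each block is derived from a weight tensor by reshaping plus zero padding, so `M` is not a view of `w0` and `w1`.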

Your code with my small modification:

```
import torch
import torch.optim as optim
# some dummy tensors that have the same shape as `a` used to initialize `x`
_a = torch.tensor(1.)
_b = torch.tensor(2.)
x = torch.vstack((_a, _b))
# now we finally create `a`
a = x.select(0, 0)
_a = a  # rebind `_a` to the view so it tracks updates
a.requires_grad_(True) # a.is_leaf is True
opt = optim.Adam([a]) # optimize `a`
loss = a ** 2
loss.backward()
print(_a)
print(a)
print(x)
opt.step()
print(_a) # should be updated as well
print(a)
print(x) # should be updated as well
```

Here is an example of my goal:

```
import torch
import torch.optim as optim
# original weight matrix
a = torch.tensor([[[1., 2.], [3., 4.]],
                  [[1., 1.], [2., 2.]]])  # shape [2, 2, 2]
# create a modified weight matrix from the original one
a_new = torch.cat((a[0], torch.zeros(2, 2), a[1]))  # shape [6, 2]
a.requires_grad_(True) # a.is_leaf is True
opt = optim.Adam([a]) # optimize `a`
loss = a.sum()
loss.backward()
print("a",a)
print("a_new",a_new)
opt.step()
print("================")
print("================")
print("a",a)
print("a_new",a_new) # should be updated as well
```
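(As far as I understand, the last line cannot work as written, because `torch.cat` allocates new storage rather than returning a view, so `a_new` is a copy and does not follow changes to `a`. A quick check of that assumption:)

```
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
a_new = torch.cat((a, torch.zeros(2, 2)))  # cat copies, it does not create a view
a[0, 0] = 99.                              # in-place change to the original
print(a_new[0, 0])                         # the copy did not follow the change
```

That is exactly why I am looking for a way to keep the big matrix in sync without rebuilding it.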