Hi, I’m using third-party code in which the authors define an input x as an nn.Parameter and compute its gradient themselves (a custom sparse backward, without calling loss.backward()). The code looks like

```
x = nn.Parameter(..., requires_grad=True)
# ... custom forward ops ...
sparse_grad()                    # fills x.grad manually; no loss.backward() here
optimizer = optim.RMSprop([x])   # renamed to avoid shadowing the optim module
optimizer.step()
optimizer.zero_grad()
```

I checked x after the sparse_grad call, and x.grad indeed contained the correct values, which makes sense since we are replacing torch’s backward with a custom backward function. Now I want to continue the backward computation from the gradient of x to some other input tensor. Is this possible in torch?
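For context, here is a minimal sketch of what I’d like to do (hypothetical names; it assumes x can be made a non-leaf tensor computed from the other input y, rather than a leaf nn.Parameter):

```python
import torch

# y is the "other input tensor" I want gradients for; x depends on y.
y = torch.randn(3, requires_grad=True)
x = y * 2.0        # non-leaf tensor in the autograd graph
x.retain_grad()    # optional: keep x.grad around for inspection

# Stand-in for the manually computed sparse gradient of the loss w.r.t. x.
manual_grad = torch.tensor([1.0, 0.0, 0.5])

# Continue backprop from x with the hand-computed gradient:
# autograd applies the chain rule, so y.grad = manual_grad * dx/dy.
x.backward(gradient=manual_grad)

print(y.grad)  # tensor([2.0000, 0.0000, 1.0000]) since dx/dy = 2
```

Would something like `x.backward(gradient=...)` (or `torch.autograd.grad`) be the right way to chain my custom gradient into the rest of the graph?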

Many thanks in advance!