How to construct a tensor from parameters without breaking the graph

I have discovered that I am severing the computational graph when constructing a parameterized tensor:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.A_params = nn.Parameter(torch.tensor([0.3, 0.1]), requires_grad=True)
        self.update_ss()

    def update_ss(self):
        k_a, k_b = torch.sigmoid(self.A_params)  # constrain to (0, 1)
        self.A = torch.tensor([[-k_a, 0], [k_a, -k_b]])  # does not preserve the gradient??

How do I construct my tensor A while preserving the gradient flow from A_params to A? This is what I have working so far:

A = torch.zeros((2, 2))
A[0, 0] = -k_a  # in-place indexing keeps the graph intact
A[1, 0] = k_a
A[1, 1] = -k_b

This works, but I feel like there is a better way…

See if torch.stack() helps here:

import torch

a = torch.tensor([3.0, 4], requires_grad=True)
x = (a+2)*3*torch.exp(a)
y = torch.tensor([5.0, 6], requires_grad=True)

z = torch.stack((x, y)) # graph not disturbed
z.sum().backward()
print(a.grad) # tensor([ 361.5397, 1146.5612])
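
Applied to your update_ss, something like this should keep A connected to A_params (just a sketch, not tested against your full model):

    def update_ss(self):
        k_a, k_b = torch.sigmoid(self.A_params)  # still constrained to (0, 1)
        zero = torch.zeros_like(k_a)             # constant 0 entry, no grad needed
        # build each row from scalars, then stack the rows into the 2x2 matrix;
        # A ends up with a grad_fn leading back to A_params
        self.A = torch.stack([
            torch.stack([-k_a, zero]),
            torch.stack([k_a, -k_b]),
        ])

Since A is only rebuilt when update_ss runs, you may also want to call it inside forward() so A always reflects the current parameter values.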

Feel free to post back if you run into errors with this.