Computing gradients without detach().numpy() in a function that only accepts np.ndarray

I am attempting to feed a tensor created with PyTorch into a simulation library called SimPEG. The tensor has the matrix format SimPEG expects, but it is a torch.Tensor with requires_grad=True and carries gradient information. The integration is proving very complex, so I am looking for a way to pass the tensor (or an np.ndarray) into SimPEG through a proxy that records every operation performed on it in memory, just as PyTorch's autograd does.

Thanks for your time!

It sounds like you are looking for the Extending PyTorch — PyTorch 2.3 documentation?
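A minimal sketch of that approach: wrap the NumPy-only call in a custom `torch.autograd.Function`, detaching to NumPy in `forward` and supplying the derivative yourself in `backward`. Here `numpy_only_sim` is a hypothetical stand-in for the SimPEG call (it just squares its input); with the real simulation you would return its adjoint/Jacobian-vector product in `backward` instead.

```python
import numpy as np
import torch


# Hypothetical stand-in for a NumPy-only simulation (e.g. a SimPEG forward).
def numpy_only_sim(x: np.ndarray) -> np.ndarray:
    return x ** 2


class NumpySim(torch.autograd.Function):
    """Bridges a NumPy-only function into PyTorch's autograd graph."""

    @staticmethod
    def forward(ctx, x: torch.Tensor) -> torch.Tensor:
        ctx.save_for_backward(x)
        # Leave the graph only at this boundary; autograd still tracks
        # everything before and after this call.
        y_np = numpy_only_sim(x.detach().cpu().numpy())
        return torch.from_numpy(y_np).to(x.device, x.dtype)

    @staticmethod
    def backward(ctx, grad_out: torch.Tensor) -> torch.Tensor:
        (x,) = ctx.saved_tensors
        # Analytic derivative of the stand-in: d(x**2)/dx = 2*x.
        return grad_out * 2.0 * x


x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = NumpySim.apply(x).sum()
y.backward()
print(x.grad)  # tensor([2., 4., 6.])
```

The key point is that `detach().numpy()` is fine inside `forward`, because you take responsibility for the gradient in `backward`; autograd never needs to trace through the NumPy code itself.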