How to combine two models into one output and train?

Hi
I am trying to implement the paper “Dense and Low-Rank Gaussian CRFs Using Deep Embeddings”, in which two models are trained together from a single input. The outputs of the two models are then combined into a single linear system, and the solution of that system is what feeds my loss function, i.e.:

One model outputs the vector B, the other outputs the matrix A, and both feed a single layer built around the relation Ax = B. The x that minimizes the error in this equation is then used to compute the loss L(x, y), where y is the ground truth.
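In code, the setup I have in mind looks roughly like the sketch below. The ResNet heads, the sizes, and the use of `torch.linalg.solve` are just placeholders of mine to illustrate the wiring, not the paper's actual layers:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoBranchSolve(nn.Module):
    """Sketch of the setup: two backbones see the same input, one predicts
    the matrix A, the other the vector B, and a solve layer returns x from Ax = B."""
    def __init__(self, n):
        super().__init__()
        self.n = n
        # placeholder heads: a real implementation would replace the final fc
        # layers with whatever the paper's A / B heads actually are
        self.branch_a = models.resnet18(num_classes=n * n)  # -> A
        self.branch_b = models.resnet18(num_classes=n)      # -> B

    def forward(self, img):
        A = self.branch_a(img).view(-1, self.n, self.n)
        B = self.branch_b(img).view(-1, self.n, 1)
        # torch.linalg.solve is differentiable, so this dense version gets
        # dL/dA and dL/dB from autograd with no custom backward at all
        # (it assumes A is invertible; the Gaussian-CRF construction is
        # supposed to make it positive definite)
        x = torch.linalg.solve(A, B)
        return x.squeeze(-1)
```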

For the combined layer Ax = B I can use conjugate gradients to compute dL/dA and dL/dB. How do I wire all of this together for backpropagation?
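To make the question concrete, this is the kind of custom `torch.autograd.Function` I have in mind: a plain un-batched CG solver in the forward, and the implicit-differentiation formulas for the gradients in the backward (dL/dB = A^{-T} g and dL/dA = -(A^{-T} g) xᵀ, with g = dL/dx). I am assuming A is symmetric positive definite, which is what conjugate gradients requires anyway:

```python
import torch

def cg_solve(A, b, n_iter=100, tol=1e-8):
    """Plain conjugate-gradient solver for one SPD system A x = b
    (single (n, n) matrix and (n,) vector, no batching, to keep it short)."""
    x = torch.zeros_like(b)
    r = b.clone()
    p = r.clone()
    rs_old = r.dot(r)
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs_old / p.dot(Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r.dot(r)
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

class SolveLayer(torch.autograd.Function):
    """x = A^{-1} B, with the gradients obtained by implicit differentiation."""
    @staticmethod
    def forward(ctx, A, B):
        x = cg_solve(A, B)
        ctx.save_for_backward(A, x)
        return x

    @staticmethod
    def backward(ctx, grad_x):
        A, x = ctx.saved_tensors
        # second CG solve: A^T lam = dL/dx  (A is SPD here, so A^T == A)
        lam = cg_solve(A, grad_x)
        grad_B = lam                    # dL/dB = lam
        grad_A = -torch.outer(lam, x)   # dL/dA = -lam x^T
        return grad_A, grad_B
```

Is wrapping the solve like this enough for the gradients to keep flowing back into both networks when I call loss.backward()?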

L(x, y) -> loss.backward() -> Ax = B layer -> dL/dA and dL/dB -> some PyTorch backward implementation.
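If the solve layer has a proper backward, I would expect a single training step to cover that whole chain, something like the following (the dummy batch, the sizes, and the MSE loss are placeholders, and TwoBranchSolve is the earlier sketch):

```python
import torch
import torch.nn as nn

# dummy batch standing in for a real data loader
img = torch.randn(2, 3, 224, 224)
y = torch.randn(2, 64)

model = TwoBranchSolve(n=64)  # or a version whose forward calls SolveLayer.apply(A, B)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

optimizer.zero_grad()
x = model(img)          # both ResNets -> A, B -> layer solving Ax = B
loss = criterion(x, y)  # L(x, y)
loss.backward()         # autograd reaches the solve layer, applies its backward
                        # (dL/dA and dL/dB), then continues into both backbones
optimizer.step()
```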

Both models are ResNets or similar, and I am working in PyTorch.

Thanks for any help.