Using an output layer's vector as weights for the next layer

I'm looking to impose a mathematical model on the neural network architecture. I want to train two neural networks such that one estimates one vector and the second estimates another vector, but the final output is the convolution of the two networks' outputs, and the target vector is compared to the output of that convolution. In short, I'm trying to estimate FIR filter coefficients that change dynamically as a function of the input data. How can I achieve this in PyTorch? I'm having trouble understanding how, and whether, backprop will work in this scenario.
For more clarification: one can say that the two networks operate in parallel, merge at the final layer, and the loss is calculated at the merged layer.
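To make it concrete, here is roughly what I have in mind; the Linear branches, the sizes, and the use of torch's functional conv for the merge step are just placeholders:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder branches; the real networks would be deeper.
signal_net = nn.Linear(10, 64)  # network 1: estimates the signal vector
filter_net = nn.Linear(10, 8)   # network 2: estimates the FIR coefficients

x = torch.randn(1, 10)
sig = signal_net(x).view(1, 1, -1)   # (batch, channels, length)
taps = filter_net(x).view(1, 1, -1)  # (out_channels, in_channels, taps)

y_hat = F.conv1d(sig, taps)          # merged output: convolution of both estimates
loss = F.mse_loss(y_hat, torch.randn_like(y_hat))
loss.backward()                      # will this update both networks?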

You can implement your idea directly by passing the outputs of both models to an additional conv layer or functional call, since Autograd will track all differentiable operations.
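As a toy sanity check, with plain tensors standing in for the two model outputs, you can verify that gradients reach both branches through any differentiable merge op:

import torch

# Stand-ins for the outputs of the two networks.
outA = torch.randn(8, requires_grad=True)
outB = torch.randn(8, requires_grad=True)

# Any differentiable combination of the two is tracked by Autograd.
loss = (outA * outB).sum()
loss.backward()

print(outA.grad is not None, outB.grad is not None)  # True True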

So suppose signal_k and signal_m are the outputs of the two networks; would using, for example, np.convolve(signal_k, signal_m) in the final stage of the forward pass be valid? (Or an equivalent function in torch?)

Yes, a convolution is valid, but you should stick to PyTorch operations; converting to NumPy (e.g. via np.convolve) would detach the tensors from the computation graph.
Something like this should work:

import torch
import torch.nn.functional as F

modelA = Model()  # your first network
modelB = Model()  # your second network

outA = modelA(input)
outB = modelB(input)

# outB is used as the weight tensor here, so it must be shaped
# (out_channels, in_channels, kH, kW) for F.conv2d
out = F.conv2d(outA, outB)
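
One caveat: since the second argument of F.conv2d is the weight tensor, outA and outB must already have the channel layouts noted above. For 1D FIR filtering, F.conv1d is the more natural choice, and if every sample in the batch gets its own predicted filter, a grouped convolution is one common trick. A minimal sketch, assuming network A outputs signals of shape (batch, length) and network B outputs taps of shape (batch, n_taps); dynamic_fir is just an illustrative helper:

import torch
import torch.nn.functional as F

def dynamic_fir(sig, taps):
    # sig:  (batch, length)  -- output of network A
    # taps: (batch, n_taps)  -- output of network B
    batch = sig.shape[0]
    # Fold the batch into the channel dimension and use groups so
    # each signal is convolved only with its own filter.
    out = F.conv1d(sig.unsqueeze(0),   # (1, batch, length)
                   taps.unsqueeze(1),  # (batch, 1, n_taps)
                   groups=batch)
    return out.squeeze(0)              # (batch, length - n_taps + 1)

out = dynamic_fir(torch.randn(4, 128), torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 113])

Also note that PyTorch's conv ops actually compute cross-correlation, so if the exact FIR sign convention matters you would flip the taps along the last dimension first.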