Connecting two seq2seq models implemented in different workspaces

Hi All,

I am currently trying to find a way to connect two seq2seq models so that the output of one serves as the input to the other deep neural model.

Both models are pre-trained. However, each model is implemented in a different framework using different libraries. Is there a way to jointly optimize models that are implemented in different workspaces?

In other words, can the loss calculated in the second network also be used to update the parameters of the first network?

I would appreciate any help. Thank you!

No, I don’t think you can optimize both models together using different frameworks.
While the inference use case might work, I’m not aware of any tool allowing the backward pass to cross between frameworks.
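For the inference-only hand-off, a common approach is to detach the first model’s output and pass it to the second framework as a plain array. A minimal sketch, assuming a placeholder PyTorch model and a hypothetical `model_b_predict` call standing in for the second framework’s inference API:

```python
import torch

# Placeholder for the pre-trained PyTorch seq2seq model.
model_a = torch.nn.Linear(10, 5)

x = torch.randn(1, 10)

with torch.no_grad():          # no computation graph needed for pure inference
    out_a = model_a(x)

# Hand the values over as a plain NumPy array, which any framework can consume.
out_np = out_a.cpu().numpy()

# `model_b_predict` is a hypothetical stand-in for the second framework's
# inference call (e.g. a Keras-style `model.predict`).
# result = model_b_predict(out_np)
```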


Thank you so much for your reply!

I was thinking of stacking the two models in the same workspace and passing the output of one as the input to the other. Do you think the two networks could be optimized together this way?

I highly appreciate your input regarding this matter.
Thank you

No, I don’t think so, as the frameworks would have to share a common autograd implementation.
E.g. PyTorch uses its autograd backend, creates the computation graph during the forward pass, and stores the .grad_fn as an attribute of each activation tensor; the second framework would need to be able to automatically connect the PyTorch output to its own “backend engine” (however it’s implemented).
That being said, you might be able to combine the different frameworks manually.
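A rough sketch of such a manual bridge, assuming the second framework can return its loss and the gradient of that loss with respect to its input (the `run_second_model` function below is a hypothetical, self-contained stand-in): detach the PyTorch output before handing it over, then feed the externally computed gradient back in via `tensor.backward(gradient=...)`.

```python
import torch

# Placeholder for the pre-trained PyTorch seq2seq model.
model1 = torch.nn.Linear(10, 5)
optimizer1 = torch.optim.SGD(model1.parameters(), lr=0.01)


def run_second_model(arr):
    # Hypothetical placeholder for the second framework's forward/backward pass.
    # Here a dummy mean-squared loss on a NumPy array keeps the snippet runnable;
    # in practice this would call into the other framework and return
    # (loss, dLoss/d(input)) as plain arrays.
    loss = (arr ** 2).mean()
    grad = 2 * arr / arr.size
    return loss, grad


x = torch.randn(2, 10)

# Forward pass in PyTorch; out1 keeps its grad_fn for the later backward pass.
out1 = model1(x)
out1_np = out1.detach().cpu().numpy()

# Forward/backward in the second framework (hypothetical call).
loss_value, grad_wrt_input_np = run_second_model(out1_np)

# Bridge the gradient back into PyTorch manually.
grad = torch.from_numpy(grad_wrt_input_np).to(out1.dtype)
optimizer1.zero_grad()
out1.backward(grad)   # continues backprop through model1 with the external gradient
optimizer1.step()
```

The key point is that PyTorch never sees the second framework’s graph; it only receives the gradient of the downstream loss with respect to `out1`, which is enough for `backward()` to propagate through the first model.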