How do you add an MLP in a feedback loop (not just feedforward, as is usual in NNs) to implement a kind of recurrent NN with arbitrary structure?

I am trying to implement a neural network in the manner of a control system, using feedback loops, in order to develop a multichannel control system. But RNN-like layers are not suited for this purpose. I am a little confused about how to build such a thing in PyTorch and how to train it. Is it possible?

I don’t fully understand your description, but PyTorch allows you to use ordinary Python loops, so you can feed the model’s output back into itself and train the whole thing with autograd.
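As a minimal sketch (names and dimensions are made up, not from your post): one MLP whose output is concatenated with the next input and fed back in, unrolled with a plain loop. Autograd backpropagates through every iteration, so it trains like a simple RNN:

```python
import torch
import torch.nn as nn

class FeedbackMLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=32):
        super().__init__()
        # The MLP sees the external input concatenated with its previous output.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + out_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )
        self.out_dim = out_dim

    def forward(self, inputs):
        # inputs: (seq_len, batch, in_dim)
        batch = inputs.shape[1]
        y = torch.zeros(batch, self.out_dim)   # initial feedback signal
        outputs = []
        for x_t in inputs:                     # ordinary Python loop
            y = self.mlp(torch.cat([x_t, y], dim=-1))
            outputs.append(y)
        return torch.stack(outputs)

model = FeedbackMLP(in_dim=3, out_dim=2)
x = torch.randn(10, 4, 3)                      # 10 time-steps, batch of 4
target = torch.randn(10, 4, 2)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()                                # gradients flow through all time-steps
```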

I mean something like in this picture, but imagine that all the blocks are different MLPs.
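Without seeing the exact picture, here is a guess at what you mean, assuming a classic closed-loop control layout (error = reference minus fed-back output, controller block, plant block, feedback block). Each block is its own MLP; the block names and dimensions below are illustrative only:

```python
import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim, hidden=32):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, out_dim))

class ClosedLoop(nn.Module):
    def __init__(self, dim=2):
        super().__init__()
        self.controller = make_mlp(dim, dim)   # one block
        self.plant      = make_mlp(dim, dim)   # a different block
        self.feedback   = make_mlp(dim, dim)   # another block in the feedback path
        self.dim = dim

    def forward(self, reference):
        # reference: (seq_len, batch, dim) -- desired trajectory
        batch = reference.shape[1]
        y = torch.zeros(batch, self.dim)       # initial plant output
        outputs = []
        for r_t in reference:
            error = r_t - self.feedback(y)     # summing junction
            u = self.controller(error)         # control signal
            y = self.plant(u)                  # plant response, fed back next step
            outputs.append(y)
        return torch.stack(outputs)

loop = ClosedLoop()
ref = torch.randn(20, 8, 2)
out = loop(ref)
loss = nn.functional.mse_loss(out, ref)        # e.g., track the reference
loss.backward()                                # trains all blocks jointly through the loop
```

The key point is that nothing special is needed: because the loop is plain Python, you can wire the blocks in whatever feedback topology your diagram shows, and backpropagation through time falls out of autograd automatically.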