Is it possible to plug in extra logic when serving a serialized model?

Hi,

I am just wondering: is it possible to plug in extra logic between the layers of a serialized neural network model during serving?

Any help or ideas would be greatly appreciated.

Best,

Could you explain your use case a bit more?
I.e., what does serving mean in this context, and what would you like to add?
Are you referring to some deployment software/platform, or just a plain PyTorch model you would like to change?

Thanks for the reply.

Yeah, sure. I am referring to the prediction process once the serialized model (say, a DNN) is loaded.

I am not sure whether PyTorch serving supports injecting extra logic between the layers of the network during the prediction process. I suspect that the JIT does not provide such an API. Would it require fundamental changes to the source code to enable such a feature?
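
To make the question concrete, here is roughly the kind of "extra logic" I have in mind, sketched with forward hooks on a plain eager model (the `ToyDNN` class and its layers are made up for illustration). My question is whether something equivalent is possible once the model has been serialized and loaded with `torch.jit.load`:

```python
import torch
import torch.nn as nn

# Toy stand-in for the serialized DNN (hypothetical architecture).
class ToyDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 4)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = ToyDNN().eval()

# Extra logic to run after each layer, without touching the model's code.
# Returning a tensor from the hook would replace that layer's output.
def log_activation(module, inputs, output):
    print(f"{module.__class__.__name__}: output norm = {output.norm():.4f}")

handles = [m.register_forward_hook(log_activation) for m in model.children()]

with torch.no_grad():
    model(torch.randn(1, 8))  # hooks fire once per layer

for h in handles:
    h.remove()
```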

Best,