Keep model in memory between CLI calls?

I am using a pretty bodged method: saving images from a third-party application, calling my Python script to run inference on those images, and then importing the result back into the third-party application. The calls are made through JavaScript with a script that only lives for a few milliseconds, so each time I call the Python script it has to load my model into memory again.

Is there any way to keep the model in memory between calls to my inference script?

Maybe I could call one script that loads the model into memory and another that performs inference (and can somehow access that model)?

The best option seems to be to run a local webserver that keeps the model loaded.
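As a rough sketch of that idea, the standard library's `http.server` is enough: the model is loaded once when the server process starts and stays in memory for its lifetime, and each short-lived JavaScript call just makes an HTTP request. Note that `load_model`, the `image_path` field, and port `8765` are placeholders for whatever your setup actually uses.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def load_model():
    # Placeholder: swap in your real model loading, e.g.
    # torch.load(...) or tf.keras.models.load_model(...)
    return lambda path: {"image": path, "result": "ok"}


MODEL = load_model()  # loaded once, lives as long as the server process


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"image_path": "frame.png"}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))

        # Run inference with the already-loaded model
        result = MODEL(payload["image_path"])

        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet


def run_server(port: int = 8765) -> HTTPServer:
    """Create the server; call .serve_forever() on it to start handling requests."""
    return HTTPServer(("127.0.0.1", port), InferenceHandler)
```

You would start `run_server(8765).serve_forever()` once (manually or at login), and the JavaScript side would POST the saved image's path to `http://127.0.0.1:8765/` and read the response, since both processes share the same filesystem. Flask or FastAPI would give you the same pattern with less boilerplate if you don't mind a dependency.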