TorchScript realtime: running multiple inferences from the same model

Hello!
I’m writing some software (in C++) to load TorchScript modules and run their methods in real time for audio applications (code on GitHub).

My question is: is it safe to run multiple independent inferences concurrently on the same loaded Module? My guess is that it isn’t, not least because methods can have side effects…

If it isn’t safe, what would be the way to go? So far I’ve been using clone() to give each thread its own copy of the Module, but I’d like to hear from this community: what would you do?
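
For context, here is a minimal sketch of what I mean by the clone approach. The file name, input shape, and thread count are just placeholders, and I’m assuming torch::jit::Module::clone() is the right call here, which is part of what I’m asking:

```cpp
#include <torch/script.h>
#include <thread>
#include <vector>

int main() {
    // Load the scripted model once.
    torch::jit::Module model = torch::jit::load("model.pt");

    const int num_threads = 2;
    std::vector<std::thread> workers;

    for (int i = 0; i < num_threads; ++i) {
        // Give each worker its own cloned Module so no state is shared.
        torch::jit::Module local = model.clone();
        workers.emplace_back([local = std::move(local)]() mutable {
            torch::NoGradGuard no_grad;             // inference only
            auto input = torch::randn({1, 1, 512}); // placeholder audio block
            auto out = local.forward({input}).toTensor();
            (void)out;
        });
    }

    for (auto& t : workers) t.join();
}
```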