Torch JIT Static Runtime Restrictions

I am looking at the new static runtime feature inside torch.jit on the C++ side. Are there any restrictions on the models it can run? For example, if a model performs a backward pass during inference, does the static runtime still support it?
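For concreteness, here is roughly the setup I have in mind. This is only a sketch: I'm assuming the `StaticModule` wrapper declared in `torch/csrc/jit/runtime/static/impl.h` (the exact constructor and call-operator signatures seem to vary across releases), and `grad_model.pt` is a hypothetical module whose `forward()` internally calls `torch.autograd.grad`, i.e. it runs a backward pass at inference time.

```cpp
#include <torch/script.h>
// Assumption: this header exposes torch::jit::StaticModule; the API may
// differ depending on the PyTorch version.
#include <torch/csrc/jit/runtime/static/impl.h>

#include <iostream>
#include <vector>

int main() {
  // Hypothetical model whose forward() computes input gradients via
  // torch.autograd.grad, i.e. does a backward pass during inference.
  torch::jit::Module mod = torch::jit::load("grad_model.pt");

  // Wrap the module for the static runtime.
  torch::jit::StaticModule smod(mod);

  std::vector<c10::IValue> inputs{torch::randn({1, 16})};

  // Does this still work when the graph contains autograd calls?
  c10::IValue out = smod(inputs);

  // Assumes the model returns a single tensor.
  std::cout << out.toTensor() << "\n";
}
```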

If it doesn’t, is there a reliable way to run inference on such a model from many CPU threads simultaneously? Deepcopying the TorchScript module doesn’t seem to create a new underlying graph.
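This is the kind of per-thread setup I have been trying (again just a sketch; `model.pt` is a placeholder path, and I'm using `torch::jit::Module::deepcopy()` from the C++ API). Each thread gets its own deepcopy of the module, but as far as I can tell the copies still point at the same underlying graph:

```cpp
#include <torch/script.h>

#include <thread>
#include <vector>

int main() {
  torch::jit::Module mod = torch::jit::load("model.pt");  // placeholder path

  std::vector<std::thread> workers;
  for (int i = 0; i < 8; ++i) {
    // deepcopy() duplicates the module object (parameters/attributes)...
    torch::jit::Module local = mod.deepcopy();
    workers.emplace_back([local]() mutable {
      std::vector<c10::IValue> inputs{torch::randn({1, 16})};
      // ...but the graph behind forward() appears to be shared between
      // all of the copies, which is what prompted the question above.
      local.forward(inputs);
    });
  }
  for (auto& w : workers) {
    w.join();
  }
}
```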