JIT inference time issue with variable batch-size

I have noticed that the first time I run inference with a new batch size, it takes over 1 s; after that, the same batch size only takes around 0.01 s. I think the JIT recompiles the graph for each new batch size. Is it possible to compile the graph for all batch sizes before performing inference (without actually passing a tensor of every possible batch size), so the initial inference doesn't take so long? I could pad each batch up to a fixed size, but my other models are larger, and adding useless samples to the batch would further increase inference time even for batch sizes that are already cached. A minimal repro of the timing behavior is below.
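
Here is a small sketch of what I mean (the model here is just a placeholder; my real models are much larger). The first call for each batch size is slow, and repeated calls with the same size are fast:

```python
import time

import torch
import torch.nn as nn

# Placeholder model; stands in for my much larger real models.
model = torch.jit.script(
    nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
)

with torch.no_grad():
    for batch_size in (1, 2, 4, 8, 16, 32):   # batch sizes I expect at runtime
        x = torch.randn(batch_size, 128)
        start = time.time()
        model(x)                               # first call per size is slow (~1 s for my models)
        mid = time.time()
        model(x)                               # second call with the same size is fast (~0.01 s)
        end = time.time()
        print(f"batch {batch_size}: first {mid - start:.3f}s, second {end - mid:.3f}s")
```

Looping over every expected batch size like this as a warm-up is what I'd like to avoid, since I can't enumerate all possible sizes in advance.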