Varied batch size for compiled model

I have an experimental setup where the batch size changes at each iteration during training. I am using torch.compile() for my model. Whenever the model sees a new batch size it recompiles, so the whole process becomes extremely slow.

Is it possible to have a compiled model that doesn't recompile every time it sees a new batch size?
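
For reference, a minimal sketch of the situation (the nn.Linear model and shapes here are placeholders, not from my actual setup):

```python
import torch

# Log recompilation events so the slowdown is visible.
torch._logging.set_logs(recompiles=True)

model = torch.nn.Linear(128, 64)
compiled = torch.compile(model)

for bs in (8, 16, 32):  # a different batch size each iteration
    x = torch.randn(bs, 128)
    # Depending on the PyTorch version and compile settings, each unseen
    # batch size can trigger a recompile here.
    compiled(x)
```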

Yes, you can pass dynamic=True to torch.compile for now. Another option is to warm up and precompile your model with multiple batch sizes (both sketched below). Eventually we'll probably have support for nested tensors with torch.compile to make this easier.
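
A minimal sketch of both options, again with a placeholder model rather than your actual one:

```python
import torch

model = torch.nn.Linear(128, 64)

# Option 1: treat shapes (including the batch dimension) as dynamic up
# front, so new batch sizes don't force a recompile.
compiled_dynamic = torch.compile(model, dynamic=True)

# Option 2: warm up a compiled model by running it once on every batch
# size you expect, paying the compile cost before training starts.
compiled = torch.compile(model)
for bs in (8, 16, 32):  # the batch sizes your training loop will use
    compiled(torch.randn(bs, 128))
```

The warm-up route only helps if the set of batch sizes is known and small; if the batch size is essentially arbitrary, dynamic=True is the simpler choice.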