TorchDynamo - how to fall back at the fine-grained subgraph level?

Assuming that a user's custom backend only supports compiling a convolution operator, is it possible to configure TorchDynamo / torch.compile so that it automatically falls back to eager for the rest of the graph at a fine-grained subgraph level?

I assume you mean partitioning the graph so that only supported ops run on your custom backend. For this you might check out this colab.
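A minimal sketch of that partitioning idea, using torch.fx's `CapabilityBasedPartitioner` and `OperatorSupport` inside a custom torch.compile backend. The `ConvOnlySupport` class and `conv_only_backend` function are hypothetical names for illustration; here the backend just groups the supported (convolution) nodes into fused submodules and runs everything eagerly, but those fused submodules are where you would hand off to your compiler:

```python
import torch
from torch.fx.passes.infra.partitioner import CapabilityBasedPartitioner
from torch.fx.passes.operator_support import OperatorSupport


class ConvOnlySupport(OperatorSupport):
    # Hypothetical support checker: only convolution nodes are "supported"
    # by the custom backend; everything else should stay in eager mode.
    def is_node_supported(self, submodules, node):
        if node.op == "call_module":
            return isinstance(submodules[node.target], torch.nn.Conv2d)
        return node.op == "call_function" and node.target is torch.conv2d


def conv_only_backend(gm: torch.fx.GraphModule, example_inputs):
    # Partition the captured graph: supported nodes are grouped into
    # fused submodules, unsupported nodes remain in the outer graph.
    partitioner = CapabilityBasedPartitioner(
        gm, ConvOnlySupport(), allows_single_node_partition=True
    )
    partitioned = partitioner.partition_and_fuse()
    # In a real backend you would now replace each fused submodule with
    # your compiled artifact; returning the partitioned GraphModule as-is
    # simply runs the whole thing eagerly.
    return partitioned


model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
compiled = torch.compile(model, backend=conv_only_backend)
x = torch.randn(1, 3, 16, 16)
out = compiled(x)
```

Since the fallback pieces stay in eager mode, the partitioned module should produce the same result as the original model.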
