Hi, I’m trying to apply torch.compile() to my model.
I’ve seen consistent speedups with simple CNN architectures, but I see a slowdown when the model class contains a for loop.
In other words, if there is a loop in the forward() of the model class (like the structure in GitHub - yinboc/liif: Learning Continuous Image Representation with Local Implicit Image Function, CVPR 2021 (Oral)), the time per epoch increases when torch.compile() is applied.
Is there any way to avoid this slowdown?
Unfortunately, removing the for loop is not an option.
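For context, here is a minimal sketch of the kind of forward() I mean. This is a toy stand-in, not the actual LIIF code; the model, layer sizes, and loop count are made up just to show the structure:

```python
import torch
import torch.nn as nn

class LoopyModel(nn.Module):
    """Toy stand-in for a model with a Python-level loop in forward()."""
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(16, 16)

    def forward(self, x, n_steps=4):
        # Python-level loop: the compiler traces through and unrolls it,
        # so the traced graph grows with n_steps
        for _ in range(n_steps):
            x = torch.relu(self.layer(x))
        return x

model = LoopyModel()
# backend="eager" exercises the tracing step without code generation
compiled = torch.compile(model, backend="eager")
out = compiled(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 16])
```

In my real model the loop body is heavier and the iteration count depends on the input, which I suspect matters here.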