Difference between aot_function and aot_module in AOT Autograd?

https://pytorch.org/functorch/nightly/aot_autograd.html

Thanks for your work! Now I want to try a tiny example in PyTorch 2.0…

I have some questions:

  1. Does aot_module perform compilation the same way aot_function does? If I want to compile both the forward graph and the backward graph of a PyTorch DNN, should I call aot_module or aot_function? (I wrote down my current understanding in a small sketch right after this list.)

  2. In the tutorial "AOT Autograd - How to use and optimize?" (functorch 1.13 documentation), you state: "AOT Autograd provides simple mechanisms to compile the extracted forward and backward graphs through deep learning compilers, such as NVFuser, NNC, TVM and others." How can I send the graphs to TVM? Can you give me some idea or a tiny example? (I put my own guess at the end of this post.)

  3. On Colab I tried to use aot_module to perform compilation, but the speed-up I measured is very small. Is there something wrong with my code below? (I also pasted the benchmark I used right after the code.)
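
For question 1, here is how I currently understand the two APIs (a minimal sketch based on my reading of the docs, using the `nop` compiler, which I believe returns the extracted graph unchanged; please correct me if the distinction is wrong):

import torch
from functorch.compile import aot_function, aot_module, nop

# aot_function wraps a plain Python function
def f(a, b):
    return (a * b).sum()

compiled_f = aot_function(f, fw_compiler=nop, bw_compiler=nop)
a = torch.randn(4, requires_grad=True)
b = torch.randn(4, requires_grad=True)
compiled_f(a, b).backward()

# aot_module wraps an nn.Module; as far as I can tell, its parameters
# simply become extra inputs to the extracted forward/backward graphs
lin = torch.nn.Linear(4, 2)
compiled_lin = aot_module(lin, fw_compiler=nop, bw_compiler=nop)
compiled_lin(torch.randn(8, 4)).sum().backward()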

My code is:

from functorch.compile import aot_function, aot_module, draw_graph
from functorch.compile import ts_compile
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.fx as fx
import numpy as np

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

# compile both the extracted forward and backward graphs via TorchScript
nf = aot_module(net, fw_compiler=ts_compile, bw_compiler=ts_compile)

trainloop(nf)…
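
This is roughly how I timed it on Colab (a separate minimal benchmark rather than my full trainloop, with warm-up iterations because I assume the first calls include graph extraction and compilation):

import time

x = torch.randn(64, 3, 32, 32)  # CIFAR-10-sized input for the Net above

def bench(fn, iters=100, warmup=10):
    # warm-up runs so one-time compilation cost is excluded from the measurement
    for _ in range(warmup):
        fn(x).sum().backward()
    start = time.perf_counter()
    for _ in range(iters):
        fn(x).sum().backward()
    # on GPU I would also call torch.cuda.synchronize() before reading the clock
    return (time.perf_counter() - start) / iters

print("eager:   ", bench(net))
print("compiled:", bench(nf))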

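And for question 2, this is my untested guess at how TVM could be plugged in, continuing from the code above. If I understand correctly, fw_compiler/bw_compiler can be any callable that takes an fx.GraphModule plus example inputs and returns a callable, so maybe something like this (the Relay calls are from my reading of the TVM docs, not from the functorch tutorial):

import tvm
from tvm import relay
from tvm.contrib import graph_executor

def tvm_compiler(fx_module, example_inputs):
    # TorchScript-trace the extracted FX graph so TVM's PyTorch frontend can ingest it
    jit_module = torch.jit.trace(fx_module, example_inputs)
    shape_list = [(f"inp_{i}", tuple(t.shape)) for i, t in enumerate(example_inputs)]
    mod, params = relay.frontend.from_pytorch(jit_module, shape_list)
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)
    m = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))

    def exec_tvm(*args):
        for i, arg in enumerate(args):
            m.set_input(f"inp_{i}", tvm.nd.array(arg.detach().numpy()))
        m.run()
        return [torch.from_numpy(m.get_output(i).numpy())
                for i in range(m.get_num_outputs())]

    return exec_tvm

nf_tvm = aot_module(net, fw_compiler=tvm_compiler, bw_compiler=tvm_compiler)

Is this what you had in mind, or is there a more direct path?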
Thanks in advance!