Following this colab tutorial, I can get the GraphModules of a compiled model, but during op replacement I ran into something strange. Here is what I found:
First, we have a very simple ResNet18 model:
from torchvision.models import resnet18
model = resnet18()
model.eval()
Then we can compile this model and get its FX graph:
import torch

graph = []

def toy_backend(gm, inputs):
    # stash the captured FX GraphModule so we can inspect it later
    graph.append(gm)
    return gm.forward

fn = torch.compile(model=model, backend=toy_backend)

# run once to trigger compilation (batch of 10, to match the shapes below)
inputs = torch.randn(10, 3, 224, 224)
output = fn(inputs)
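As an aside, note that fn is not the GraphModule itself: on recent PyTorch 2.x releases, calling torch.compile on an nn.Module returns an OptimizedModule wrapper that forwards attribute access to the original model (the _orig_mod attribute below is an internal detail and may change between versions):

>>> type(fn)
<class 'torch._dynamo.eval_frame.OptimizedModule'>
>>> fn._orig_mod is model  # attribute access on fn is forwarded here
True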
We can print some modules and the output shape:
>>> graph[0].self_fc
Linear(in_features=512, out_features=1000, bias=True)
>>> fn.fc
Linear(in_features=512, out_features=1000, bias=True)
>>> output.shape
torch.Size([10, 1000])
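Before replacing anything, we can also check that fn.fc resolves to the very same Linear object as model.fc, which follows from the attribute forwarding above (assuming an OptimizedModule wrapper):

>>> fn.fc is model.fc
True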
OK, now we can replace the last fc layer, through the FX graph, with a new one that has different parameters, then run again:
graph[0].self_fc = torch.nn.Linear(512, 10)
output = fn(inputs)
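Since we only rebound the self_fc attribute on the captured GraphModule, the eager model should still go through the original 1000-class head; a quick sanity check, assuming model.fc itself is untouched:

>>> model(inputs).shape
torch.Size([10, 1000])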
Print again:
>>> graph[0].self_fc
Linear(in_features=512, out_features=10, bias=True)
>>> fn.fc
Linear(in_features=512, out_features=1000, bias=True)
# see, fn.fc is still a 512x1000 Linear layer
>>> output.shape
torch.Size([10, 10])
As one can see, after replacing the old Linear layer with a new one through the FX graph, the compiled model still shows the old version of the module, but the output is what we want it to generate…
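For reference, here is a minimal sketch of the workaround I would try, assuming the goal is to keep fn and the captured graph consistent: replace the layer on the original model instead, and clear the compilation cache so the next call retraces (torch._dynamo.reset() clears all cached graphs process-wide):

# replace the head on the original model, not on the captured gm
model.fc = torch.nn.Linear(512, 10)

# drop the cached graphs so the next call retraces the new fc
torch._dynamo.reset()
fn = torch.compile(model=model, backend=toy_backend)
output = fn(inputs)  # recompiles; graph[-1] now captures the 10-class head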