TraceError: symbolically traced variables cannot be used as inputs to control flow

Hello,
I am getting an error when trying to create a feature extractor with the create_feature_extractor() function (from torchvision.models.feature_extraction import create_feature_extractor). What I want to do is:

  1. I create a model using the timm library:

model = timm.create_model('maxvit_xlarge_tf_512.in21k_ft_in1k', pretrained=True)

  2. I collect the names of the attention layers:

attention_layer_names = []
layer_name = 'attn_drop'

for name, module in model.named_modules():
    if layer_name in name:
        attention_layer_names.append(name)

  3. I try to create a feature extractor:

feature_extractor = create_feature_extractor(model, return_nodes=attention_layer_names)

I am getting this error

TraceError: symbolically traced variables cannot be used as inputs to control flow

How could I solve it?

The error is raised because torch.fx has trouble tracing x = pad_same(x, self.kernel_size, self.stride), which is used in this timm model and contains data-dependent control flow. You could add this function to autowrap_functions so the tracer treats it as a leaf:

feature_extractor = torchvision.models.feature_extraction.create_feature_extractor(
    model, return_nodes=attention_layer_names, tracer_kwargs={'autowrap_functions': [timm.layers.padding.pad_same]})

which will then fail with:

ValueError: node: 'stages.0.blocks.0.attn_block.attn.attn_drop' is not present in model. Hint: use `get_graph_node_names` to make sure the `return_nodes` you specified are present. It may even be that you need to specify `train_return_nodes` and `eval_return_nodes` separately.

as it seems this module does not show up as a node in the traced graph.
It may be easier to use forward hooks as described in this post instead.
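The forward-hook approach avoids FX tracing entirely, so control flow inside the model is not a problem. A minimal sketch, using a toy module in place of the timm MaxViT (the Toy class and its attn_drop attribute are hypothetical stand-ins):

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn_drop = nn.Dropout(p=0.0)  # stands in for an attention-dropout layer
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        return self.fc(self.attn_drop(x))

model = Toy()
activations = {}

def make_hook(name):
    # Capture the layer's output under its module name
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on every module whose name contains 'attn_drop',
# mirroring the named_modules() filter from the question
for name, module in model.named_modules():
    if 'attn_drop' in name:
        module.register_forward_hook(make_hook(name))

out = model(torch.randn(2, 4))
print(sorted(activations))  # names of the hooked layers
```

After the forward pass, activations maps each hooked layer name to its output tensor, which is effectively what create_feature_extractor would have returned.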
