The error is raised because torch.fx has trouble tracing `x = pad_same(x, self.kernel_size, self.stride)`, which is used in this timm model.
You could add this function to `autowrap_functions` as seen here:

```python
feature_extractor = torchvision.models.feature_extraction.create_feature_extractor(
    model,
    return_nodes=attention_layer_names,
    tracer_kwargs={'autowrap_functions': [timm.layers.padding.pad_same]},
)
```
which will then fail with:

```
ValueError: node: 'stages.0.blocks.0.attn_block.attn.attn_drop' is not present in model. Hint: use `get_graph_node_names` to make sure the `return_nodes` you specified are present. It may even be that you need to specify `train_return_nodes` and `eval_return_nodes` separately.
```

as it seems this module is not used in the traced graph.
It may be easier to use forward hooks as described in this post instead.
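A minimal sketch of the forward-hook approach (the model and the layer name `"fc1"` are placeholders; on your model you would register the hook on the attention layers instead):

```python
import torch
import torch.nn as nn

# Toy model standing in for the timm model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

activations = {}

def get_hook(name):
    # The hook stores the layer's output under the given name.
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# Register the hook on the layer whose activation you want.
model[0].register_forward_hook(get_hook("fc1"))

out = model(torch.randn(1, 4))
print(activations["fc1"].shape)  # torch.Size([1, 8])
```

Since hooks run during the normal forward pass, this avoids `torch.fx` tracing entirely and works with control flow or functions that `symbolic_trace` cannot handle.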