How to build the network code definition from model.eval() output?

Hi there,

I am new to PyTorch and trying to understand it.
I have a pretrained model from a paper. The paper's code is extensive, and it is hard to tell from it how the network was actually defined.
Still, I managed to load the pretrained model and evaluate it, so I got the following structure:

basic_conv1d(
  (0): Sequential(
    (0): Sequential(
      (0): Conv1d(12, 128, kernel_size=(8,), stride=(1,), padding=(3,), bias=False)
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
  )
  (1): Sequential(
    (0): Sequential(
      (0): Conv1d(128, 256, kernel_size=(5,), stride=(1,), padding=(2,), bias=False)
      (1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
  )
  (2): Sequential(
    (0): Sequential(
      (0): Conv1d(256, 128, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
  )
  (3): Sequential(
    (0): AdaptiveConcatPool1d(
      (ap): AdaptiveAvgPool1d(output_size=1)
      (mp): AdaptiveMaxPool1d(output_size=1)
    )
    (1): Flatten()
    (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (3): Dropout(p=0.25, inplace=False)
    (4): Linear(in_features=256, out_features=128, bias=True)
    (5): ReLU(inplace=True)
    (6): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): Dropout(p=0.5, inplace=False)
    (8): Linear(in_features=128, out_features=5, bias=True)
  )
)

I would love to write the network definition myself, so I don't have to work with the
supplied and very hard-to-follow code.

Is there some kind of parser that takes such a printed model structure and builds the code definition?
Or could anybody help me with this?

Thank you very much!!

I don’t think there is such a parser, as the forward method is not captured in this output.
I.e. while the submodules are listed, it’s unclear how they are used (e.g. purely sequentially, or with different paths inside the forward).
Assuming these layers are used in a sequential manner, you could recreate the modules by wrapping them in nn.Sequential containers, e.g.

self.layer1 = nn.Sequential(
    nn.Conv1d(12, 128, 8, 1, 3, bias=False),
    nn.BatchNorm1d(128),
    nn.ReLU(inplace=True)
)
One unknown would be the AdaptiveConcatPool1d, which doesn’t seem to be a core PyTorch layer, but seems to be defined in FastAI.
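Putting it together, here is a sketch of what the full definition could look like, assuming the blocks are applied strictly sequentially and that AdaptiveConcatPool1d works like the fastai layer of the same name (concatenating adaptive max pooling and adaptive average pooling along the channel dimension). The class and attribute names (BasicConv1d, layer1, head, etc.) are just placeholders:

```python
import torch
import torch.nn as nn


class AdaptiveConcatPool1d(nn.Module):
    """Minimal re-implementation of fastai's AdaptiveConcatPool1d:
    concatenates adaptive max pooling and adaptive average pooling."""
    def __init__(self, output_size=1):
        super().__init__()
        self.ap = nn.AdaptiveAvgPool1d(output_size)
        self.mp = nn.AdaptiveMaxPool1d(output_size)

    def forward(self, x):
        return torch.cat([self.mp(x), self.ap(x)], dim=1)


class BasicConv1d(nn.Module):
    """Recreation of the printed structure, assuming a sequential forward."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Conv1d(12, 128, 8, 1, 3, bias=False),
            nn.BatchNorm1d(128),
            nn.ReLU(inplace=True),
        )
        self.layer2 = nn.Sequential(
            nn.Conv1d(128, 256, 5, 1, 2, bias=False),
            nn.BatchNorm1d(256),
            nn.ReLU(inplace=True),
        )
        self.layer3 = nn.Sequential(
            nn.Conv1d(256, 128, 3, 1, 1, bias=False),
            nn.BatchNorm1d(128),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            AdaptiveConcatPool1d(1),
            nn.Flatten(),
            nn.BatchNorm1d(256),  # 128 (max) + 128 (avg) = 256 features
            nn.Dropout(p=0.25),
            nn.Linear(256, 128),
            nn.ReLU(inplace=True),
            nn.BatchNorm1d(128),
            nn.Dropout(p=0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        # Assumed order of execution; verify against the original code.
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        return self.head(x)


model = BasicConv1d()
model.eval()
out = model(torch.randn(2, 12, 1000))  # (batch, channels, length)
print(out.shape)  # torch.Size([2, 5])
```

If the attribute names match those of the checkpoint (they likely won’t), you could then try model.load_state_dict(torch.load(...)); otherwise you would need to remap the keys of the state_dict to the new names.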