ONNX conversion - Dense layers instead of Gemm

Hi,
recently I converted a PyTorch model to ONNX (please see the model and conversion code below). It is a model with several dense layers in a row. The model structure itself is garbage; please focus on the translation. The exported model is translated into a sequence of Gemm, non-linearity, and element-wise (eltwise) operations. I expected the ONNX model to contain Dense layers. In fact, I need Dense layers for a tool that takes ONNX as input, but I was not able to generate the model that way.

My question is: how can I convert the model into a sequence of Dense layers instead?

The shape of input_for_onnx is torch.Size([75]). I have also tried dummy inputs with shape (1, 75).
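For reference, a dummy input with these shapes can be created along the following lines (the exact construction I used is not shown here, so treat this as a sketch):

import torch

# plain 1-D dummy input, shape torch.Size([75])
input_for_onnx = torch.randn(75)

# batched dummy input I also tried, shape (1, 75)
# input_for_onnx = torch.randn(1, 75)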

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Linear

class vanilla(nn.Module):
    def __init__(self):
        super(vanilla, self).__init__()
        #self.bn = nn.BatchNorm1d(num_features=75)
        # stack of fully connected (Linear) layers, 75 -> 75
        self.linear1 = Linear(75, 75)
        self.linear2 = Linear(75, 75)
        ...
        self.linear18 = Linear(75, 75)
        self.linear19 = Linear(75, starting_neurons)   # starting_neurons is defined elsewhere
        self.linear20 = Linear(starting_neurons, 43)

    def forward(self, X):
        origin = X
        X = F.relu(self.linear1(X))
        X = F.relu(self.linear2(X))
        X = F.relu(self.linear3(X))
        X += origin                     # skip connection (eltwise add)
        X = F.relu(self.linear4(X))
        ...
        X = F.relu(self.linear19(X))
        X = self.linear20(X)
        return X
        
batch_size = 1

torch.onnx.export(model.cpu(),                # model being run
                  input_for_onnx.cpu(),       # model input (or a tuple for multiple inputs)
                  "gtsrb.onnx",               # where to save the model (can be a file or file-like object)
                  export_params=True,         # store the trained parameter weights inside the model file
                  opset_version=10,           # the ONNX version to export the model to
                  do_constant_folding=False,  # whether to execute constant folding for optimization
                  input_names=['input'],      # the model's input names
                  output_names=['output'],    # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},    # variable-length axes
                                'output': {0: 'batch_size'}})
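For reference, this is how the operators in the exported graph can be listed (a minimal sketch using the onnx Python package; the file name matches the export call above):

import onnx

onnx_model = onnx.load("gtsrb.onnx")

# print the operator type of every node in the exported graph;
# for this model it comes out as a sequence of Gemm, Relu and Add nodes
for node in onnx_model.graph.node:
    print(node.op_type)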

PS: I originally posted this on the ONNX GitHub but was redirected to the PyTorch forum, as this is not a matter of ONNX itself but of how the model is translated by PyTorch.