Automatically detect input size

I’m trying to build a multilayer perceptron for sentiment classification. I’m using skorch for cross-validation and to integrate the model into a pipeline that performs the hashing trick. I want to optimise the number of features used in the hashing trick, so the input dimension will change every time I change that value. Is there a way to automatically detect the input size within the model class?

This is my model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, input_dim=5000, num_hidden=1, hidden_dim=1, output_dim=4, dropout=0.5):
        # Building the network from here
        super(MLP, self).__init__()
        
        # Hidden layers
        self.linears = nn.ModuleList([nn.Linear(input_dim if i == 0 else hidden_dim, hidden_dim) for i in range(num_hidden)])
        
        # Output layer
        self.ol = nn.Linear(hidden_dim, output_dim)
        
        # Dropout
        self.dropout = nn.Dropout(dropout)
    
    def forward(self, data, **kwargs):
        # To float
        X = data.float()
        
        # Hidden layers
        for hl in self.linears:
            X = hl(X)
            X = F.relu(X)
            X = self.dropout(X)
        
        # Output layer
        out = self.ol(X)
        out = F.softmax(out, dim=-1)
        
        return out

Currently the input size is an argument of __init__, but I would like the code to detect it automatically. How can I do that?

Detecting the size inside the class won’t make much practical difference, but you could pass the expected input tensor to the __init__ method and read the feature dimension via input_dim = x.size(1) (assuming the features are in dim 1).
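That approach might look like the following (a minimal sketch using a stripped-down version of your MLP; the example_input constructor argument and the layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SizedMLP(nn.Module):
    def __init__(self, example_input, hidden_dim=16, output_dim=4):
        super().__init__()
        # Read the feature count off the example batch (features in dim 1)
        input_dim = example_input.size(1)
        self.hidden = nn.Linear(input_dim, hidden_dim)
        self.ol = nn.Linear(hidden_dim, output_dim)

    def forward(self, X):
        return self.ol(torch.relu(self.hidden(X.float())))

X = torch.randn(8, 300)        # e.g. a batch of hashed features
model = SizedMLP(X)            # input_dim detected as 300
print(model(X).shape)          # torch.Size([8, 4])
```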
Alternatively, you could keep your code as it is and construct the model with model = MLP(x.size(1), ...) instead.
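If you would rather have the module detect the size entirely on its own, newer PyTorch versions (1.8+) provide nn.LazyLinear, which infers in_features from the first batch it sees. A minimal sketch (the class name and layer sizes are illustrative; I haven’t verified how lazy modules interact with your skorch setup):

```python
import torch
import torch.nn as nn

class LazyMLP(nn.Module):
    def __init__(self, hidden_dim=16, output_dim=4, dropout=0.5):
        super().__init__()
        # LazyLinear defers creating its weight until the first forward
        # pass, so in_features never has to be passed in.
        self.hidden = nn.LazyLinear(hidden_dim)
        self.dropout = nn.Dropout(dropout)
        self.ol = nn.Linear(hidden_dim, output_dim)

    def forward(self, X):
        X = self.dropout(torch.relu(self.hidden(X.float())))
        return torch.softmax(self.ol(X), dim=-1)

X = torch.randn(8, 300)    # any feature count works
model = LazyMLP()
print(model(X).shape)      # torch.Size([8, 4])
```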