Best practices for testing many different architectures

Hi,
I am currently testing two model types (LSTM, Transformer) with different input representations (embeddings, one-hot encoding) and several different transformations applied beforehand. The transformations influence the shape of the input representation (e.g. the embedding).

I define the model, input representation, and transformation in a YAML configuration file.
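For illustration, a stripped-down version of such a config, loaded here inline with PyYAML (every key and value is a made-up placeholder, not my actual schema):

```python
import yaml  # PyYAML

# Made-up example config; the real file holds the same kind of keys.
CONFIG_TEXT = """
model: lstm                  # or: transformer
representation: embedding    # or: one_hot
transformation: lowercase    # hypothetical preprocessing step
representation_params:
  dim: 128
"""

config = yaml.safe_load(CONFIG_TEXT)
```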

At the moment I have essentially specified two models (LSTM, Transformer). Based on the configuration, I load the correct transformations and input representations and pass them to the model and dataset.
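Roughly, the dispatch looks like this (a trimmed-down sketch assuming PyTorch; the class names and constructor arguments are placeholders, not my real code):

```python
import torch.nn as nn

# Placeholder components standing in for the real ones.
class EmbeddingRepresentation(nn.Module):
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.output_dim = dim

    def forward(self, x):
        return self.embed(x)

class OneHotRepresentation(nn.Module):
    def __init__(self, vocab_size=1000):
        super().__init__()
        self.output_dim = vocab_size

    def forward(self, x):
        return nn.functional.one_hot(x, num_classes=self.output_dim).float()

class LSTMModel(nn.Module):
    def __init__(self, representation, hidden_size=64):
        super().__init__()
        self.representation = representation
        # The model reads its input size off the representation, so a
        # transformation that changes the shape only affects one place.
        self.rnn = nn.LSTM(representation.output_dim, hidden_size,
                           batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(self.representation(x))
        return out

# Registries map config strings to classes; the Transformer model and
# the transformations would be registered the same way.
REPRESENTATIONS = {"embedding": EmbeddingRepresentation,
                   "one_hot": OneHotRepresentation}
MODELS = {"lstm": LSTMModel}

def build_from_config(config):
    rep_cls = REPRESENTATIONS[config["representation"]]
    rep = rep_cls(**config.get("representation_params", {}))
    return MODELS[config["model"]](rep)
```

This keeps the combinatorics in the config and the registries rather than in the class hierarchy.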

Is this a valid approach?
The alternative would be to create one model definition for each combination of model, embedding, and transformation. This would create some repeated code, but fewer parts that depend on each other.

Thanks in advance

With that alternative you're basically describing copy-pasted definitions with high implicit coupling: any change to a shared part has to be repeated in every combination. That is mostly inferior to swappable model parts behind abstract interfaces. Manually repeated code is almost always something to avoid.
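As a sketch of what such an interface could look like (assuming PyTorch; the class and method names here are invented, not a standard API):

```python
from abc import ABC, abstractmethod

import torch
import torch.nn as nn

class InputRepresentation(nn.Module, ABC):
    """Contract every input representation fulfills, so the model
    classes depend only on this interface, never on a concrete class."""

    @property
    @abstractmethod
    def output_dim(self) -> int:
        """Per-token feature size after all transformations; the model
        sizes its first layer from this value."""

    @abstractmethod
    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        """Map (batch, seq_len) token ids to (batch, seq_len, output_dim)."""
```

With that contract in place, adding a third representation or transformation means writing one new class and one registry entry, not another copy of each model.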