from tensorflow import keras
from tensorflow.keras import layers, Sequential

def build_model(n_features, n_features_2, n_labels, label_smoothing=0.0005):
    input_1 = layers.Input(shape=(n_features,), name='Input1')
    input_2 = layers.Input(shape=(n_features_2,), name='Input2')

    head_1 = Sequential([
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.Dense(512, activation='elu'),
        layers.BatchNormalization(),
        layers.Dense(256, activation='elu'),
    ], name='Head1')
    input_3 = head_1(input_1)

    input_3_concat = layers.Concatenate()([input_2, input_3])

    head_2 = Sequential([
        layers.BatchNormalization(),
        layers.Dropout(0.3),
        layers.Dense(512, activation='relu'),
        layers.BatchNormalization(),
        layers.Dense(512, activation='elu'),
        layers.BatchNormalization(),
        layers.Dense(256, activation='relu'),
        layers.BatchNormalization(),
        layers.Dense(256, activation='elu'),
    ], name='Head2')
    input_4 = head_2(input_3_concat)

    input_4_avg = layers.Average()([input_3, input_4])

    head_3 = Sequential([
        layers.BatchNormalization(),
        layers.Dense(256, kernel_initializer='lecun_normal', activation='selu'),
        layers.BatchNormalization(),
        layers.Dense(n_labels, kernel_initializer='lecun_normal', activation='selu'),
        layers.BatchNormalization(),
        layers.Dense(n_labels, activation='sigmoid'),
    ], name='Head3')
    output = head_3(input_4_avg)

    return keras.Model(inputs=[input_1, input_2], outputs=output)
For a general introduction to writing custom PyTorch models, have a look at this tutorial. To convert the TF model to PyTorch, initialize all modules in the `__init__` method of your custom model and use these modules in the `forward` method.
The layers have similar names, e.g. `layers.BatchNormalization` would correspond to `nn.BatchNorm1d` (which accepts both 2D `(batch, features)` inputs, as in your model, and 3D temporal inputs), while `layers.Dense` corresponds to `nn.Linear`. Note that unlike Keras, `nn.Linear` and `nn.BatchNorm1d` require the input feature size to be passed explicitly, and activations are separate modules rather than layer arguments.
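As a small illustration of that mapping (the feature sizes 128 and 256 are made up, not taken from your model), one Keras `BatchNormalization` + `Dense` pair could look like:

```python
import torch
import torch.nn as nn

# Keras: layers.BatchNormalization() followed by layers.Dense(256, activation="elu")
# PyTorch: input sizes are explicit, and the activation is its own module
block = nn.Sequential(
    nn.BatchNorm1d(128),   # normalizes over the feature dim of a (batch, 128) input
    nn.Linear(128, 256),   # Dense(256) equivalent; includes a bias by default
    nn.ELU(),              # activation applied after the affine layer
)

x = torch.randn(4, 128)    # batch of 4 samples
out = block(x)
print(out.shape)           # torch.Size([4, 256])
```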
The “functional” layers, such as `layers.Concatenate`, can be applied in the `forward` pass of your model using:

input_3_concat = torch.cat((input_2, input_3), dim=1)

Note the explicit `dim=1`: `torch.cat` defaults to `dim=0` (the batch dimension), while Keras' `Concatenate` defaults to the last axis, so for 2D feature tensors you want `dim=1` (or `dim=-1`).
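Putting it all together, a sketch of the full architecture could look like this (the class name and example sizes are my own, `layers.Average` becomes a plain elementwise mean, and I've omitted the `lecun_normal` initialization, which would need manual `nn.init` calls; `label_smoothing` belongs to the loss function, not the module, so it doesn't appear here):

```python
import torch
import torch.nn as nn

class TwoInputModel(nn.Module):
    """Hypothetical PyTorch port of the Keras model above."""

    def __init__(self, n_features, n_features_2, n_labels):
        super().__init__()
        self.head_1 = nn.Sequential(
            nn.BatchNorm1d(n_features),
            nn.Dropout(0.2),
            nn.Linear(n_features, 512), nn.ELU(),
            nn.BatchNorm1d(512),
            nn.Linear(512, 256), nn.ELU(),
        )
        concat_dim = n_features_2 + 256  # input_2 features + head_1 output
        self.head_2 = nn.Sequential(
            nn.BatchNorm1d(concat_dim),
            nn.Dropout(0.3),
            nn.Linear(concat_dim, 512), nn.ReLU(),
            nn.BatchNorm1d(512),
            nn.Linear(512, 512), nn.ELU(),
            nn.BatchNorm1d(512),
            nn.Linear(512, 256), nn.ReLU(),
            nn.BatchNorm1d(256),
            nn.Linear(256, 256), nn.ELU(),
        )
        self.head_3 = nn.Sequential(
            nn.BatchNorm1d(256),
            nn.Linear(256, 256), nn.SELU(),   # lecun_normal init omitted
            nn.BatchNorm1d(256),
            nn.Linear(256, n_labels), nn.SELU(),
            nn.BatchNorm1d(n_labels),
            nn.Linear(n_labels, n_labels), nn.Sigmoid(),
        )

    def forward(self, input_1, input_2):
        input_3 = self.head_1(input_1)
        input_3_concat = torch.cat((input_2, input_3), dim=1)
        input_4 = self.head_2(input_3_concat)
        input_4_avg = (input_3 + input_4) / 2  # layers.Average equivalent
        return self.head_3(input_4_avg)

# usage with made-up sizes
model = TwoInputModel(n_features=30, n_features_2=9, n_labels=206)
out = model(torch.randn(8, 30), torch.randn(8, 9))
print(out.shape)  # torch.Size([8, 206])
```

The skip connection works because both `input_3` and `input_4` are 256-dimensional, mirroring the Keras graph.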