from tensorflow.keras import optimizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, Conv1D, Dense, Dropout,
                                     Flatten, LSTM, MaxPooling1D)
from tensorflow.keras.regularizers import l2

def build_conv_lstm(filters, kernel_len, lstm_hidden_size):
    beta = 1e-3  # L2 regularization strength
    model = Sequential()
    # 1D convolution over one-hot encoded sequences of shape (151, 4)
    model.add(Conv1D(filters=filters, kernel_size=kernel_len,
                     input_shape=(151, 4), kernel_regularizer=l2(beta),
                     padding='same'))
    model.add(Activation('relu'))
    model.add(MaxPooling1D())  # default pool_size=2 halves the sequence length
    model.add(Dropout(0.5))
    model.add(LSTM(units=lstm_hidden_size, return_sequences=True,
                   kernel_regularizer=l2(beta), recurrent_dropout=0.1))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(150, kernel_regularizer=l2(beta), activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, kernel_regularizer=l2(beta), activation='sigmoid'))
    optim = optimizers.Adam(learning_rate=3e-4)
    model.compile(optimizer=optim, loss='binary_crossentropy')
    return model
This tutorial might be a good starting point, as it explains how to write a custom neural network in PyTorch. Since your model is quite simple, you could also use nn.Sequential directly and add the desired layers to this container. Some layer names would need to be changed (e.g. Activation('relu') → nn.ReLU, Dense → nn.Linear).
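One caveat with nn.Sequential: nn.LSTM returns a tuple (output, (h_n, c_n)), so it doesn't compose cleanly inside the container. A rough sketch as a small nn.Module is shown below. Layer sizes are taken from the Keras code above (pool size 2 shrinks 151 steps to 75); note that nn.Conv1d expects channels-first input, so the one-hot dimension is permuted, and the kernel regularization / recurrent dropout from Keras are omitted here (L2 would typically be applied via the optimizer's weight_decay instead).

```python
import torch
import torch.nn as nn

class ConvLSTM(nn.Module):
    # Rough PyTorch equivalent of the Keras model above; defaults mirror
    # the posted hyperparameters and may need adjusting.
    def __init__(self, filters=50, kernel_len=3, lstm_hidden_size=50):
        super().__init__()
        # Conv1d takes (N, C, L): 4 one-hot channels, length 151.
        # padding=kernel_len // 2 reproduces 'same' padding for odd kernels.
        self.conv = nn.Sequential(
            nn.Conv1d(4, filters, kernel_size=kernel_len,
                      padding=kernel_len // 2),
            nn.ReLU(),
            nn.MaxPool1d(2),   # 151 -> 75 time steps
            nn.Dropout(0.5),
        )
        # batch_first=True so the LSTM consumes (N, L, C)
        self.lstm = nn.LSTM(filters, lstm_hidden_size, batch_first=True)
        self.head = nn.Sequential(
            nn.Dropout(0.5),
            nn.Flatten(),                          # (N, 75 * hidden)
            nn.Linear(75 * lstm_hidden_size, 150),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(150, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (N, 151, 4)
        x = self.conv(x.permute(0, 2, 1))     # -> (N, filters, 75)
        x, _ = self.lstm(x.permute(0, 2, 1))  # -> (N, 75, hidden)
        return self.head(x)                   # -> (N, 1)

model = ConvLSTM()
out = model(torch.randn(8, 151, 4))
print(out.shape)  # torch.Size([8, 1])
```

The permutes are the main translation gotcha: Keras Conv1D and LSTM both use channels-last (N, L, C), while nn.Conv1d wants (N, C, L).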