Hello,
I have created a neural network like this: F1 = feedforward network --> RNN --> F2 = feedforward network, and I want to train all three jointly, but I'm having issues with my optimizer:
# MLP_in-----------------------------------------------------------------------
model_in = MLPModel(input_dim, args.hidden_dim_1_in, args.hidden_dim_2_in, args.hidden_dim_3_in, args.output_dim_in)
if torch.cuda.is_available():
    model_in.cuda()
# RNN--------------------------------------------------------------------------
model_RNN = LSTMModel(input_dim, args.hidden_dim, args.layer_dim, args.output_dim_rnn)
if torch.cuda.is_available():
    model_RNN.cuda()
# MLP_OUT-----------------------------------------------------------------------
model_out = MLPModel(args.output_dim_rnn, args.hidden_dim_1_out, args.hidden_dim_2_out, args.hidden_dim_3_out, args.output_dim)
if torch.cuda.is_available():
    model_out.cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
This fails because there is no single `model` object. Should I replace `model.parameters()` with some kind of sum of the three models' parameters? Or is there another way to pass all of them to one optimizer?
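For context, here is what I was thinking of trying: a minimal sketch with made-up layer sizes, using `itertools.chain` to merge the three `parameters()` iterators (they are generators, so they cannot literally be summed). I'm not sure this is the right approach:

```python
import itertools

import torch

# Hypothetical stand-ins for the three sub-networks (sizes are made up).
model_in = torch.nn.Linear(4, 8)
model_RNN = torch.nn.LSTM(8, 8, batch_first=True)
model_out = torch.nn.Linear(8, 2)

# One optimizer over all three parameter sets: chain the iterators.
params = itertools.chain(model_in.parameters(),
                         model_RNN.parameters(),
                         model_out.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)
```

Or would it be cleaner to wrap the three modules in a `torch.nn.ModuleList` (or a container `nn.Module`) so that a single `.parameters()` call covers everything?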
And thank you.