Hyperparameter tuning for LSTM model with Optuna

Hello,
I'm new to the pytorch-forecasting framework and I want to set up hyperparameter optimization for an LSTM model using Optuna. My problem is that I don't understand what all of RecurrentNetwork's parameters mean (from here: RecurrentNetwork — pytorch-forecasting documentation).

I have a univariate time-series problem. I thought of tuning the following parameters, but there are more, and I want to know whether I need to tune them for my situation or leave them at their defaults.

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_forecasting import RecurrentNetwork

# define the Optuna objective: one full training run per trial
def objective(trial):

    lstm_param = {
        'cell_type': "LSTM",
        'optimizer': 'sgd',
        'hidden_size': trial.suggest_int('hidden_size', 2, 512, step=2),
        'rnn_layers': trial.suggest_int('rnn_layers', 2, 15),
        'dropout': trial.suggest_categorical('dropout', [0.1, 0.2, 0.3, 0.4, 0.5]),
        # 'learning_rate': trial.suggest_categorical('learning_rate', [0.0001, 0.001, 0.01, 0.1, 1.0])
    }

    # generate the model from the TimeSeriesDataSet (training)
    lstm = RecurrentNetwork.from_dataset(training, **lstm_param)

    # fresh trainer per trial; keep the checkpoint with the lowest validation loss
    checkpoint = ModelCheckpoint(monitor='val_loss', mode='min')
    trainer = pl.Trainer(max_epochs=30, gradient_clip_val=0.1, callbacks=[checkpoint])
    trainer.fit(lstm, train_dataloader, val_dataloader)

    # reload the best checkpoint written during this trial
    best_model_path = trainer.checkpoint_callback.best_model_path
    best_lstm = RecurrentNetwork.load_from_checkpoint(best_model_path)

    # Optuna minimizes the returned scalar: use the best validation loss of the trial
    return checkpoint.best_model_score.item()
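This is how I was planning to wire the objective into an Optuna study; just a minimal sketch on my side, and the number of trials (20) and the minimize direction are only my assumptions:

# run the study over the objective defined above
import optuna

study = optuna.create_study(direction='minimize')   # the objective returns a validation loss
study.optimize(objective, n_trials=20)

print('Best value:', study.best_value)
print('Best params:', study.best_params)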

I really appreciate any help!