Displaying and Saving the Best Trial During Hyperparameter Search with Optuna in a Parallel GPU Environment


I am running hyperparameter optimization with Optuna integrated into the PyTorch ImageNet example, parallelized across GPUs. However, I am unsure how to display and save the results of the best trial.


I am using Optuna's study.optimize method as shown in the code below, with mp.spawn distributing the workload across the GPUs. The difficulty is capturing and saving the results of the best trial while keeping the GPU parallelization in place. How can this be achieved?

    study.optimize(
        lambda trial: mp.spawn(
            objective, args=(ngpus_per_node, args, trial), nprocs=ngpus_per_node
        )
    )

    def objective(gpu, ngpus_per_node, args, trial):
        # mp.spawn invokes this once per GPU, passing the process rank as `gpu`
        return main_worker(gpu, ngpus_per_node, args, trial)
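Part of what confuses me: as far as I can tell, mp.spawn with join=True returns None rather than main_worker's return value, so the lambda would hand None to Optuna. The pattern I have been considering is passing a queue into the workers so that rank 0 can report the metric back to the parent process. Here is a standalone sketch of that pattern using plain multiprocessing (the fork start method is used only so the snippet runs by itself; my real code uses torch.multiprocessing.spawn, and `worker`/`run_trial` are hypothetical stand-ins for main_worker and the objective):

```python
import multiprocessing as mp

def worker(rank, nprocs, result_queue):
    # Stand-in for main_worker(gpu, ...): every rank would train its shard,
    # and rank 0 pushes the final validation metric back to the parent.
    accuracy = 0.87  # hypothetical metric computed after training
    if rank == 0:
        result_queue.put(accuracy)

def run_trial(nprocs):
    # Mirrors what mp.spawn does: launch nprocs workers and join them.
    # Since the launcher itself returns None, the queue carries the result
    # that the Optuna objective needs to return.
    ctx = mp.get_context("fork")
    queue = ctx.SimpleQueue()
    procs = [ctx.Process(target=worker, args=(rank, nprocs, queue))
             for rank in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return queue.get()
```

Is something along these lines the intended way to get the per-trial metric back into study.optimize, or is there a built-in mechanism I am missing?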


What is the recommended approach to display and save the best trial while performing hyperparameter search with GPU parallelization using Optuna? Is there a specific method or practice within Optuna or PyTorch that facilitates this?
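For the saving half of the question, the part I can already do is serializing whatever the best trial turns out to be once study.optimize returns. Below is the sketch I have in mind; it assumes the best trial exposes number, params, and value attributes (as optuna.trial.FrozenTrial does), and `save_best_trial` is my own hypothetical helper, not an Optuna API:

```python
import json

def save_best_trial(number, params, value, path="best_trial.json"):
    # Persist the best trial's metadata so it survives after the
    # search process exits; returns the record that was written.
    record = {"number": number, "params": params, "value": value}
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return record

# Intended usage after the search finishes, e.g.:
# save_best_trial(study.best_trial.number,
#                 study.best_trial.params,
#                 study.best_trial.value)
```

Is manually dumping study.best_trial like this reasonable, or is the recommended practice to use a persistent storage backend so the study itself is saved?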