Hello all,
I was curious: for a hyperparameter search, which can sometimes hit issues like CUDA out-of-memory errors depending on batch_size, image size, and related parameters, is this code structure an okay practice? If there is a better way of handling this, I would love to hear comments!
import gc

import torch

for run_count, hyperparams in enumerate(HYPERPARAMS, start=1):
    # Define model, dataloaders, optimizer, scheduler, etc. for this run
    try:
        # Call training utils (placeholder for the actual training call)
        train_model(model, dataloaders, optimizer_ft, exp_lr_scheduler)
    except RuntimeError as e:
        # Handle possible errors: log the failure (e.g., CUDA out of memory)
        # together with the run so it can be inspected later
        with open("run_parameters.txt", "a") as text_file:
            text_file.write("*** Runtime Error: {}\n\n".format(e))
    finally:
        # Reset for the next run: drop references so GPU memory can be freed
        del model
        del optimizer_ft
        del exp_lr_scheduler
        del dataloaders
        gc.collect()
        torch.cuda.empty_cache()
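
One refinement I was also wondering about: distinguishing out-of-memory failures from other runtime errors, so that a genuinely broken run still crashes the search instead of being silently skipped. Here is a minimal sketch of what I mean (run_one_trial is a hypothetical helper wrapping the per-run setup and training; the substring check on the error message is, as far as I know, the portable way to detect CUDA OOM):

import gc

import torch

def run_trial_safely(hyperparams, log_path="run_parameters.txt"):
    # Run one trial; log and skip OOM failures, re-raise anything else
    try:
        run_one_trial(hyperparams)  # hypothetical: builds the model and trains
    except RuntimeError as e:
        if "out of memory" not in str(e):
            raise  # not an OOM: surface the real bug instead of hiding it
        # Log which hyperparameters blew past the GPU memory budget
        with open(log_path, "a") as f:
            f.write("*** OOM with {}: {}\n\n".format(hyperparams, e))
    finally:
        # Free cached GPU memory before the next, possibly smaller, run
        gc.collect()
        torch.cuda.empty_cache()

Would something like that be considered more robust, or is catching all RuntimeErrors fine in practice?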