Callbacks based on validation loss using Ray Tune

I would like to request examples of hyperparameter tuning with PyTorch and Ray Tune that monitor validation loss, to avoid overfitting my model.

Any link or example would work for me, thanks in advance.

Hi @nivesh_gadipudi, here are a couple of examples of using Ray Tune with PyTorch.

An easy way to prevent overfitting is simply to report “done=True” when the validation loss begins to diverge:

tune.report(done=True, ...)

# or, if using the class-based Trainable API:
def step(self):
    ...
    return {"done": True, ...}
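In practice you usually want to stop only after the validation loss has failed to improve for several consecutive epochs, not on the first uptick. A minimal sketch of that patience logic, using a hypothetical `EarlyStopper` helper (the class name and default thresholds are illustrative, not part of Ray Tune):

```python
class EarlyStopper:
    """Tracks validation loss and signals when training should stop.

    Fires once the loss has not improved by at least `min_delta`
    for `patience` consecutive checks.
    """

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        if val_loss < self.best_loss - self.min_delta:
            # Loss improved: record it and reset the counter.
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            # No improvement this check.
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Inside a Tune trainable, you would report done=True once it fires:
#
#     stopper = EarlyStopper(patience=3)
#     for epoch in range(max_epochs):
#         val_loss = validate(model)  # your validation routine
#         tune.report(val_loss=val_loss, done=stopper.should_stop(val_loss))
```

This keeps the stopping decision in your own code; Tune just sees the “done” signal and ends the trial.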

Hope that helps!
