Use different optimizers according to training error

Hello everyone.

I am writing a single-layer perceptron for multidimensional function fitting.

The idea is that, during training, I would like to switch optimizers once the loss drops below a certain threshold.

At the moment my code looks like this:
    optthres = 1e-4  # Loss threshold for switching optimizers.

    # Define both optimizers once, outside the training loop,
    # so that Rprop's per-parameter state is not reset every iteration.
    opt1 = torch.optim.Rprop(model.parameters())
    opt2 = torch.optim.LBFGS(model.parameters())

    # Inside the training loop:
    if loss.item() > optthres:  # .item() replaces the deprecated loss.data[0]
        opt1.zero_grad()
        loss.backward()
        opt1.step()
    else:
        # LBFGS requires a closure that re-evaluates the model and the loss.
        def closure():
            opt2.zero_grad()
            y_pred = model(x)
            loss = loss_fn(y_pred, y)
            loss.backward()
            return loss
        opt2.step(closure)
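
For reference, here is a minimal self-contained sketch of how I am embedding this switch in a full training loop. The model, the data (x, y), loss_fn, and the epoch count here are just placeholders to make the snippet runnable, not my actual setup:

    import torch

    # Placeholder toy problem: a single-layer perceptron fitting a 3-D function.
    x = torch.randn(128, 3)
    y = x.sum(dim=1, keepdim=True)
    model = torch.nn.Linear(3, 1)
    loss_fn = torch.nn.MSELoss()

    optthres = 1e-4  # Loss threshold for switching optimizers.
    opt1 = torch.optim.Rprop(model.parameters())
    opt2 = torch.optim.LBFGS(model.parameters())

    def closure():
        # LBFGS may call this several times per step to re-evaluate the loss.
        opt2.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        return loss

    for epoch in range(1000):
        # The current loss decides which optimizer runs this iteration.
        loss = loss_fn(model(x), y)
        if loss.item() > optthres:
            opt1.zero_grad()
            loss.backward()
            opt1.step()
        else:
            loss = opt2.step(closure)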

Is there a cleaner way to select between different optimizers based on the training error?
Thank you very much!