How to do constrained optimization in PyTorch

What’s the proper way to do constrained optimization in PyTorch?

For example, I want each parameter of my model to be bounded both from above and below by some constants cLow and cHigh.

That is, if W is the d-dimensional (flattened) weight vector of my model, I’d like to enforce
cLow < W[i] < cHigh for i = 1, 2, …, d. How can I do that?


You can do projected gradient descent by enforcing your constraint after each optimizer step. An example training loop would be:

    import torch
    from torch import optim

    # model, inputs, labels and loss_fn are assumed to be defined already
    opt = optim.SGD(model.parameters(), lr=0.1)
    for i in range(1000):
        out = model(inputs)
        loss = loss_fn(out, labels)
        print(i, loss.item())
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Projection step: clamp the parameters back into the feasible region
        with torch.no_grad():
            for param in model.parameters():
                param.clamp_(-1, 1)

The projection at the end of the loop enforces the constraint that the weights stay in the range [-1, 1].
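
If you want the cLow/cHigh bounds from the original question rather than [-1, 1], the same projection applies with those values. A minimal sketch, assuming cLow and cHigh are plain Python floats you have defined elsewhere (the values below are placeholders):

    # Sketch: project every parameter into [cLow, cHigh] after each step.
    cLow, cHigh = 0.0, 5.0  # example placeholder values
    with torch.no_grad():
        for param in model.parameters():
            param.clamp_(cLow, cHigh)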


I am also working on constrained optimization problems. My experience with this suggestion is not positive. When I don’t use clamp_() and train the model with no restrictions, the values of the specific weights I’m interested in end up close to the desired ones and the model predicts good results. But after using clamp_(), the model’s performance degrades severely. What could be the possible reason for this?


One reason is that the clamping is not communicated to the optimizer, and in particular it silently undoes part of the gradient step. The optimizer (and any momentum or adaptive state it keeps) falsely believes that it has moved the parameter in a certain direction, when in fact the parameter has been clamped back to the same value as before.
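
A minimal toy illustration of this effect (not from the thread, just a sketch): a single parameter whose loss keeps pushing it upward while the projection pins it at the bound, so SGD's momentum buffer keeps growing even though the parameter no longer moves.

    import torch
    from torch import optim

    # One parameter, pushed upward by the loss but clamped to [-1, 1] after
    # every step. Hypothetical toy setup, not code from the thread.
    w = torch.nn.Parameter(torch.tensor([0.9]))
    opt = optim.SGD([w], lr=0.1, momentum=0.9)

    for step in range(5):
        loss = -w.sum()          # d(loss)/dw = -1, so SGD keeps increasing w
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            w.clamp_(-1.0, 1.0)  # projection pins w at the upper bound
        # The momentum buffer keeps accumulating even though w is stuck at 1,
        # so the optimizer's view of the trajectory no longer matches reality.
        print(step, w.item(), opt.state[w]["momentum_buffer"].item())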