How to limit trained parameter range when applying LBFGS?

I have an optimization problem, and I want to use an LBFGS solver while limiting the parameters' range to between 0 and 1.
The code looks like this:

x = torch.zeros(len(model.raw_control_names), device=args.device)
param = nn.Parameter(x)
optim = torch.optim.LBFGS([param])

for _ in tqdm(range(args.total_iters)):
    def closure():
        optim.zero_grad()
        out = model.getVertx(param)  # pass the Parameter so gradients flow to it
        loss = F.mse_loss(out, gt, reduction='sum')
        loss.backward()
        print(f'loss: {loss:.6f}')
        return loss

    optim.step(closure)

I want to constrain `param` to the range [0, 1]. Is there a method to achieve this?
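One common workaround (my own suggestion, not something from the post above) is to optimize an unconstrained raw tensor and map it through a sigmoid before using it, so the effective parameter always stays in (0, 1) and the whole pipeline remains differentiable. A minimal self-contained sketch with a toy target in place of `model.getVertx` and `gt`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy ground truth with values inside (0, 1), standing in for `gt`.
gt = torch.tensor([0.2, 0.7, 0.9])

# Optimize an unconstrained raw tensor; sigmoid(raw) is the constrained view.
raw = nn.Parameter(torch.zeros(3))
optim = torch.optim.LBFGS([raw])

def closure():
    optim.zero_grad()
    param = torch.sigmoid(raw)  # always in (0, 1)
    loss = F.mse_loss(param, gt, reduction='sum')
    loss.backward()
    return loss

for _ in range(20):
    optim.step(closure)

result = torch.sigmoid(raw)
print(result)  # should be close to gt, and guaranteed to lie in (0, 1)
```

An alternative is to clamp after each step (`with torch.no_grad(): param.clamp_(0, 1)`), but note that LBFGS evaluates the closure multiple times per `step`, so the reparameterization approach is usually cleaner since the constraint holds at every evaluation.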