Clipping parameter values during training

I’m working on an adversarial attack project on image classifiers. The adversarial image is the only parameter passed to the optimizer, and its pixel values must stay within upper and lower bounds. How can I clip those pixel values during training? The following block of code shows the problem:

adv_image = eps * torch.rand(image.shape).to(device)
adv_image.requires_grad_(True)
optim=torch.optim.SGD([adv_image], lr=1e-3)
for epoch in range(num_epochs):
    optim.zero_grad()
    loss = ...  # loss computed after some operations
    loss.backward()
    optim.step() 
    # how can I perform clipping on a leaf node during optimization?
    # my current idea:
    tmp = adv_image.detach().clone()
    tmp.clamp_(min_value, max_value)
    adv_image = tmp.requires_grad_(True)
    optim = torch.optim.SGD([adv_image], lr=1e-3)

This approach should work as long as your optimizer doesn’t keep internal state, which is the case for plain SGD (without momentum). Note that re-creating the optimizer every iteration would discard any such state (e.g. momentum buffers) for optimizers that do keep it.
However, instead of clamping a cloned tensor and rebuilding the optimizer, you can simply wrap the in-place clamp in a torch.no_grad() block:

with torch.no_grad():
    adv_image.clamp_(-1, 1)
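
Putting it together, here is a minimal runnable sketch of the full loop. The loss is a placeholder (a simple sum of squares standing in for your classifier loss), and `image`, `eps`, and the clamp bounds are dummy values; only the clamp pattern itself is the point:

```python
import torch

device = "cpu"
eps, min_value, max_value = 0.1, -1.0, 1.0
num_epochs = 5

# Stand-in for the clean input image; replace with your real data.
image = torch.zeros(1, 3, 8, 8, device=device)

adv_image = eps * torch.rand(image.shape, device=device)
adv_image.requires_grad_(True)

optim = torch.optim.SGD([adv_image], lr=1e-3)

for epoch in range(num_epochs):
    optim.zero_grad()
    # Placeholder loss; in the real project this is the classifier
    # loss evaluated on (image + adv_image).
    loss = ((image + adv_image) ** 2).sum()
    loss.backward()
    optim.step()
    # Clamp in place without recording the op in autograd. adv_image
    # remains the same leaf tensor the optimizer references, so no
    # optimizer re-creation is needed.
    with torch.no_grad():
        adv_image.clamp_(min_value, max_value)
```

Because the clamp happens inside `torch.no_grad()`, PyTorch allows the in-place modification of a leaf tensor that requires grad, and the next `backward()` call still computes gradients with respect to the (clamped) `adv_image`.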