Modify an external trainable parameter manually

I have two external trainable parameters as follows:

bc_left_coeff = torch.tensor((1.1,),  requires_grad=True,  device=device)#.to(device)
bc_right_coeff = torch.tensor((1.1,),  requires_grad=True, device=device)#.to(device)
params = list(PINN.parameters()) + [bc_left_coeff, bc_right_coeff]
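
Both are then passed to the optimiser together with the network parameters, roughly like this (the exact optimiser and learning rate don't matter here):

optimizer = torch.optim.Adam(params, lr=1e-3)  # placeholder optimiser and learning rate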

I want to modify bc_left_coeff when a certain condition is met during training. An example would be:

for i in range(max_iter):
    # Normal training loss calculation
    data_loss = func1(args)
    bc_difference_loss = func2(args) 
    total_loss = data_loss + bc_difference_loss
    
    optimizer.zero_grad()    
    total_loss.backward()
    optimizer.step()
    
    # My special condition
    if bc_left_coeff < 0.9:
        bc_left_coeff =  torch.tensor((1.1,),  requires_grad=True, device=device)

The only problem is that after setting bc_left_coeff to 1.1, the optimiser stops updating bc_left_coeff. How do I get the optimiser to keep updating bc_left_coeff, starting from the manually modified value?

As background, the problem is ill-posed, with bc_left_coeff = 0 as a solution. I want to oscillate the value of bc_left_coeff between two physically relevant values during training. I tried clamp(), but the value gets stuck at the lower bound of the two, which I don't want.
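
Roughly, the clamp() attempt looked like this (0.9 and 1.1 stand in for the two physically relevant bounds):

with torch.no_grad():
    bc_left_coeff.clamp_(min=0.9, max=1.1)  # the value just settles at the lower bound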

Hi Prakhar!

At this point bc_left_coeff is a python reference to an object in memory,
namely the Tensor you just created. Let’s call this “Object 1.”

Then at some point you add “Object 1” to optimizer’s parameter list.
optimizer holds its own reference to “Object 1.”

When your “special condition” is satisfied you create a new Tensor that
is another object in memory. Let’s call this “Object 2.” You then set the
python reference bc_left_coeff to refer to “Object 2,” but doing so does
not change which object the reference inside of optimizer refers to – it
still refers to “Object 1.” The optimizer will still update “Object 1” (which
was not set to the new value of 1.1), but you won’t see this by looking at
bc_left_coeff. Looking at bc_left_coeff will show you “Object 2,” which
isn’t changing.
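
You can see this with a small standalone example (a toy sketch, with names chosen only for illustration):

    import torch

    obj1 = torch.tensor((1.1,), requires_grad=True)      # "Object 1"
    opt = torch.optim.SGD([obj1], lr=0.1)                 # opt now holds its own reference to "Object 1"

    coeff = obj1                                          # the python name refers to "Object 1"
    coeff = torch.tensor((1.1,), requires_grad=True)      # rebind the name to a new "Object 2"

    print(opt.param_groups[0]['params'][0] is obj1)       # True:  opt still refers to "Object 1"
    print(opt.param_groups[0]['params'][0] is coeff)      # False: rebinding the name changed nothing for opt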

You need to modify the contents of “Object 1” itself rather than set
bc_left_coeff to refer to some new Tensor (such as “Object 2”).
You would do this as follows:

    # My special condition
    if bc_left_coeff < 0.9:
        with torch.no_grad():
            bc_left_coeff[0] = 1.1

Now bc_left_coeff still refers to “Object 1” (and optimizer still holds a
reference to “Object 1”), but you’ve changed “Object 1’s” value.
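
Here is a self-contained sketch of that pattern inside a training-style loop (the loss is just a stand-in that drives the coefficient toward zero, not your actual PINN losses):

    import torch

    bc_left_coeff = torch.tensor((1.1,), requires_grad=True)
    optimizer = torch.optim.SGD([bc_left_coeff], lr=0.05)

    for i in range(6):
        loss = bc_left_coeff.pow(2).sum()   # stand-in loss with bc_left_coeff = 0 as its minimum
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # reset the *same* Tensor in place when it drops below the threshold
        if bc_left_coeff < 0.9:
            with torch.no_grad():
                bc_left_coeff[0] = 1.1

        print(i, bc_left_coeff.item())      # keeps being updated by the optimizer, and resets when tripped

(Anything that modifies the Tensor in place works here, for example bc_left_coeff.fill_(1.1) or bc_left_coeff.copy_() with another Tensor.)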

Best.

K. Frank