I have two external trainable parameters, created as follows:
bc_left_coeff = torch.tensor((1.1,), requires_grad=True, device=device)
bc_right_coeff = torch.tensor((1.1,), requires_grad=True, device=device)
params = list(PINN.parameters()) + [bc_left_coeff, bc_right_coeff]
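The optimiser is then created from this combined list (Adam and the learning rate below are just placeholders; any torch.optim optimiser that receives params works the same way):

optimizer = torch.optim.Adam(params, lr=1e-3)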
I want to modify bc_left_coeff when a certain condition is met during training. An example would be:
for i in range(max_iter):
    # Normal training loss calculation
    data_loss = func1(args)
    bc_difference_loss = func2(args)
    total_loss = data_loss + bc_difference_loss
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    # My special condition
    if bc_left_coeff < 0.9:
        # rebinding the name creates a brand-new tensor the optimizer was never given
        bc_left_coeff = torch.tensor((1.1,), requires_grad=True, device=device)
The only problem is that after resetting bc_left_coeff to 1.1, the optimiser stops updating it, presumably because the reassignment creates a new tensor object that was never registered with the optimiser. How do I keep the optimiser updating bc_left_coeff after manually changing its value?
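For clarity, this minimal sketch is the behaviour I am after, assuming an in-place write (torch.no_grad() plus fill_()) is the right way to change the value without replacing the tensor object the optimiser holds:

# after optimizer.step(): reset the coefficient in place,
# so the optimizer keeps pointing at the same tensor
with torch.no_grad():
    if bc_left_coeff.item() < 0.9:
        bc_left_coeff.fill_(1.1)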
As background, the problem is ill-posed, with bc_left_coeff = 0 as a solution. I want to oscillate the value of bc_left_coeff between two physically relevant values during training. I tried clamp(), but the coefficient gets stuck at the lower of the two bounds, which I don’t want.
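Roughly, the clamp() attempt looked like this (lower_bound and upper_bound stand in for the two physically relevant values):

with torch.no_grad():
    bc_left_coeff.clamp_(lower_bound, upper_bound)  # applied after every optimizer.step()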