How to use loop length as an optimization parameter

Hi, I am trying to use the length of a loop as an optimization parameter.

Here is example code illustrating the problem (the real problem is more complex):

import torch

positive_iter = torch.tensor([10.0], requires_grad=True)
negative_iter = torch.tensor([20.0], requires_grad=True)

optimizer = torch.optim.Adam([positive_iter, negative_iter], lr=0.02, betas=(0.5, 0.999))

for i in range(100):
    loss = torch.tensor([0.0], requires_grad=True)
    for p in range(int(positive_iter)):
        loss = loss + torch.rand(1)
    for n in range(int(negative_iter)):
        loss = loss - torch.rand(1) * 2

    loss = torch.abs(loss)
    loss.backward()
    optimizer.step()

    print(i, loss.item(), positive_iter.item(), negative_iter.item())

It does not seem to work; positive_iter and negative_iter never change from their initial values:

0 16.121417999267578 10.0 20.0
1 16.20305633544922 10.0 20.0
2 14.316006660461426 10.0 20.0
3 15.411490440368652 10.0 20.0
4 13.62910270690918 10.0 20.0
5 20.098087310791016 10.0 20.0
6 16.164840698242188 10.0 20.0
7 11.07910442352295 10.0 20.0
8 15.867802619934082 10.0 20.0
9 18.05478286743164 10.0 20.0
...continued
90 11.088855743408203 10.0 20.0
91 13.985483169555664 10.0 20.0
92 15.015034675598145 10.0 20.0
93 16.076112747192383 10.0 20.0
94 19.00041389465332 10.0 20.0
95 20.184921264648438 10.0 20.0
96 18.890005111694336 10.0 20.0
97 14.863526344299316 10.0 20.0
98 16.529191970825195 10.0 20.0
99 14.828125 10.0 20.0

Please advise on how to make this work.
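
For reference, a minimal sketch of one possible workaround, assuming the loop can be unrolled to a fixed maximum length and each iteration gated by a soft, differentiable mask, so that the (now continuous) loop length stays in the autograd graph. MAX_ITERS and the sigmoid gates below are illustrative assumptions, not part of the original code:

import torch

MAX_ITERS = 50  # hypothetical upper bound on the loop length

positive_iter = torch.tensor([10.0], requires_grad=True)
negative_iter = torch.tensor([20.0], requires_grad=True)
optimizer = torch.optim.Adam([positive_iter, negative_iter], lr=0.02, betas=(0.5, 0.999))

for step in range(100):
    optimizer.zero_grad()
    loss = torch.zeros(1)
    for i in range(MAX_ITERS):
        # Soft gates: roughly 1 while i < positive_iter / negative_iter,
        # roughly 0 afterwards, and differentiable with respect to both.
        pos_gate = torch.sigmoid(positive_iter - i)
        neg_gate = torch.sigmoid(negative_iter - i)
        loss = loss + pos_gate * torch.rand(1)
        loss = loss - neg_gate * torch.rand(1) * 2
    loss = torch.abs(loss)
    loss.backward()
    optimizer.step()
    print(step, loss.item(), positive_iter.item(), negative_iter.item())

Whether a relaxation like this is usable depends on whether the per-iteration contributions in the real problem can be fractionally weighted.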

I do not think positive_iter is differentiable. Can you do the math to show the
derivative?
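
For reference, a sketch of the math behind that concern: int() truncates toward zero, which for positive values is the floor function, and the floor function is piecewise constant, so

\frac{d}{dx}\lfloor x \rfloor = 0 \quad \text{for } x \notin \mathbb{Z}, \qquad \text{undefined at } x \in \mathbb{Z}.

So even in theory the gradient of the loss with respect to positive_iter is zero almost everywhere. In the code above it is not even that: int(positive_iter) returns a plain Python int, so positive_iter never enters the autograd graph, its .grad stays None, and Adam has nothing to update.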
