Constant loss while training

Hello all,
I am getting a constant loss during training.
Here is how I define my optimizer:

input_img=torch.randn(3,64,64,requires_grad=True, device='cuda')
optimizer=optim.Adam([input_img], lr=1e-05)

After that I simply calculate the loss and call optimizer.step() to update input_img.
But the loss stays constant, and I can't find the mistake I made.

Here is the iteration loop.

while True:
    input_img.data.clamp_(0,1)
    optimizer.zero_grad()
  
    input_img=input_img.view(3,64,64)
    input_img=norm(input_img)
    input_img=input_img.view(1,3,64,64)
    score=net(input_img)
  

    style_score = 0
    content_score = 0
      
    for ip in range(4):
        if ip == 1:
            content_score = content_score + content_loss(content[1], score[1])
        else:
            style_score = style_score + style_loss(gram_matrix(style[ip]), gram_matrix(score[ip])) * 10
    print('style_loss = ',style_score)
    print('content_loss = ',content_score)
    loss_total=style_score+content_score
    print('total_loss = ',loss_total)
    loss_total.backward(retain_graph=True)
    optimizer.step()
    count=count+1
    print('#################################',count,'#######################################')

Don’t use the .data attribute, as it bypasses autograd and might yield unexpected side effects.
Also, you are currently overwriting input_img in:

input_img=input_img.view(3,64,64)
input_img=norm(input_img)
input_img=input_img.view(1,3,64,64)

so you should use another variable name to keep the original input_img tensor alive. After the first reassignment, the name input_img points to a non-leaf view, not the leaf tensor you passed to the optimizer, so optimizer.step() no longer updates the tensor you are feeding forward (and that's also why you needed retain_graph=True).
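Here is a minimal, self-contained sketch of the fixed pattern. It uses a toy objective (pushing the image mean towards 0.5) in place of your net/style/content losses, which are assumed to be fine:

```python
import torch
import torch.optim as optim

torch.manual_seed(0)

# Leaf tensor handed to the optimizer -- this exact object must keep
# receiving the updates, so never reassign the name `input_img`.
input_img = torch.randn(3, 4, 4, requires_grad=True)
optimizer = optim.Adam([input_img], lr=0.1)

for step in range(100):
    optimizer.zero_grad()

    # Work on a NEW name; `x` is a non-leaf view, and gradients
    # still flow back to the leaf `input_img`.
    x = input_img.view(1, 3, 4, 4)

    # Toy loss standing in for the style/content losses.
    loss = (x.mean() - 0.5) ** 2

    loss.backward()   # no retain_graph needed: the graph is rebuilt each iteration
    optimizer.step()

    # Clamp in-place without touching .data and without
    # recording the op in autograd.
    with torch.no_grad():
        input_img.clamp_(0, 1)

# input_img.is_leaf is still True, so the optimizer keeps updating it.
```

With this structure the loss actually decreases, because optimizer.step() and the forward pass operate on the same tensor.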