Do I need to create a copy of the original image and the output when there are multiple losses?

I am working on an image-to-image translation task.
There are multiple losses besides the L1 and GAN losses.
I noticed that copies of the original image and of the output are created for the GAN loss.
I do not understand why we would do that.
If I compute another loss, do I also need to create a copy?
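For reference, this is the pattern I mean (a minimal, self-contained sketch of a pix2pix-style setup; netG, netD, and the tensor shapes are stand-ins I made up, not the actual code):

import torch
import torch.nn as nn

netG = nn.Conv2d(3, 3, 3, padding=1)   # stand-in generator
netD = nn.Conv2d(3, 1, 3, padding=1)   # stand-in discriminator
criterionGAN = nn.BCEWithLogitsLoss()

real_A = torch.randn(1, 3, 8, 8)
fake = netG(real_A)

# discriminator update: detach() cuts the graph so D's loss does not
# backpropagate into G (this detach is the "copy" I noticed)
pred_fake = netD(fake.detach())
loss_D_fake = criterionGAN(pred_fake, torch.zeros_like(pred_fake))
loss_D_fake.backward()

# generator update: the same output is used without detach()
pred_fake = netD(fake)
loss_G_GAN = criterionGAN(pred_fake, torch.ones_like(pred_fake))
loss_G_GAN.backward()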
I have tried several different variants.
In one variant, without a copy, the result (computed from the original image or the output, and later used to compute the loss) is assigned to a new variable.
The error is: element 0 of tensors does not require grad and does not have a grad_fn. (I also came across: leaf variable has been moved into the graph interior.)
Then I tried with a copy, but got: no grad accumulator for a saved leaf.
What should I do?
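The first error can be reproduced with a toy example (not my real code; I suspect this is what happens when the intermediate result is computed under torch.no_grad()):

import torch
import torch.nn as nn

criterionL1 = nn.L1Loss()
output = torch.randn(1, 3, 8, 8, requires_grad=True)  # stand-in for the generator output
gt = torch.randn(1, 3, 8, 8)

with torch.no_grad():        # nothing inside records a grad_fn
    h_output = output * 2    # stand-in for get_H
    h_gt = gt * 2

loss = criterionL1(h_output, h_gt)
loss.backward()  # RuntimeError: element 0 of tensors does not require grad
                 # and does not have a grad_fn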

The relevant code is as follows:
#loss_l1 = criterionL1(output, gt)   # commented out: this line triggers the in-place error described below
loss_l1 = 0
#with torch.no_grad():               # also tried wrapping the lines below in no_grad()
H_gt = gt                            # note: this binds a new name to gt, it is not a copy
H_gt = get_H(gt, H_gt)
H_output = output                    # likewise just another name for output, not a copy
H_output = get_H(output, H_output)
H_loss_l1 = criterionL1(H_output, H_gt)
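What I expected to need instead is something like this (a sketch of the detach-based variant I am considering; I keep the two-argument get_H signature from my code above, and clone() is only my guess for a safe scratch argument):

H_gt = get_H(gt, gt.clone()).detach()      # targets should carry no gradient
H_output = get_H(output, output.clone())   # keep the graph through the generator
H_loss_l1 = criterionL1(H_output, H_gt)

Is this the right approach, or is an explicit copy still needed somewhere?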

If I use loss_l1 = criterionL1(output, gt), the error is: one of the variables needed for gradient computation has been modified by an inplace operation…
But if I just set loss_l1 = 0, everything runs fine.
Why is that, and what is the problem?
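For what it's worth, the same in-place error can be reproduced with a toy example (stand-in tensors, not my real model):

import torch

x = torch.randn(3, requires_grad=True)
y = x * 2
z = (y ** 2).sum()   # pow saves y for its backward pass
y.add_(1)            # in-place op modifies y after it was saved
z.backward()         # RuntimeError: one of the variables needed for gradient
                     # computation has been modified by an inplace operation

So my guess is that get_H writes into its second argument in place, and since H_output = output is just another name for the same tensor, output itself gets modified after criterionL1(output, gt) saved it for backward. With loss_l1 = 0, criterionL1(output, gt) is never called, so nothing saved gets invalidated. Is that right?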