How to set per-pixel L1 loss for image synthesis?

Hi PyTorchers!
I’m trying to set up a per-pixel L1 loss for my image synthesis network.
It would be very helpful if you could check whether this is correct.

First, I set up the L1 loss element-wise (per pixel):

pixel_loss = torch.nn.L1Loss(size_average=False, reduce=False)

And here’s a snippet of my training code:

result_image = myModel(…)  # result_image has a shape of (N, 1, 256, 256), where N is the batch size
loss = pixel_loss(result_image, answer_image)
loss.backward(loss)
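For reference, here is a small self-contained sketch of the per-pixel case, with random tensors standing in for myModel’s output, and using the newer `reduction='none'` argument (which replaces `size_average=False, reduce=False` in recent PyTorch versions). One assumption worth flagging: calling `backward()` on a non-scalar tensor requires an explicit gradient argument, and `torch.ones_like(loss)` is a common choice that weights every pixel equally.

```python
import torch

# Random tensors standing in for myModel's output and the target
# (N=2 and 64x64 just to keep the example small)
result_image = torch.randn(2, 1, 64, 64, requires_grad=True)
answer_image = torch.randn(2, 1, 64, 64)

# reduction='none' keeps one loss value per element (newer API for reduce=False)
pixel_loss = torch.nn.L1Loss(reduction='none')
loss = pixel_loss(result_image, answer_image)
print(loss.shape)  # torch.Size([2, 1, 64, 64]) -- same shape as the inputs

# Non-scalar tensors need an explicit gradient argument to backward();
# ones_like weights every pixel equally (same as loss.sum().backward())
loss.backward(torch.ones_like(loss))
print(result_image.grad.shape)  # torch.Size([2, 1, 64, 64])
```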

So… is this correct?
And if I want to change the per-pixel loss to a scalar loss, would it be like this?

scalar_loss = torch.nn.L1Loss(size_average=True, reduce=True)
result_image = myModel(…)
loss = scalar_loss(result_image, answer_image)
loss.backward()
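And the corresponding scalar-case sketch, again with random stand-in tensors and the newer `reduction='mean'` argument (which replaces `size_average=True, reduce=True`):

```python
import torch

result_image = torch.randn(2, 1, 64, 64, requires_grad=True)
answer_image = torch.randn(2, 1, 64, 64)

# reduction='mean' averages the absolute errors over all elements
# (newer API for size_average=True, reduce=True)
scalar_loss = torch.nn.L1Loss(reduction='mean')
loss = scalar_loss(result_image, answer_image)
print(loss.dim())  # 0 -- a scalar, so plain loss.backward() works

loss.backward()
```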

Any advice would be appreciated. Thanks!