Implementing perceptual loss

Hello all,
I am trying to implement a perceptual loss function, but I am running into a contrast issue when using it.
I have tried this:
loss = torch.norm(output - target, 2)  # L2 norm of the flattened feature difference
where output and target are the feature maps produced by the VGG19 network
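
(For context, the features come from a frozen, pretrained VGG19, roughly like this; the relu3_3 cut-off and the placeholder inputs are just examples:)

import torch
from torchvision import models

vgg = models.vgg19(pretrained=True).features[:16].eval()  # up to relu3_3
for p in vgg.parameters():
    p.requires_grad = False  # freeze the feature extractor

generated = torch.rand(1, 3, 224, 224)     # placeholder generated image
ground_truth = torch.rand(1, 3, 224, 224)  # placeholder target image

output = vgg(generated)
target = vgg(ground_truth)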

and this

import torch.nn as nn

criterion = nn.MSELoss()
loss = criterion(output, target)

and also this:

a = output - target
b = a.pow(2)             # element-wise squared differences
loss = torch.norm(b, 2)  # 2-norm of the squared differences, not their sum or mean
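
(As a sanity check, these three are not actually equivalent; with dummy feature maps of arbitrary shape, something like the snippet below shows the first is a sum-of-squares norm, the second a mean, and the third is not a mean squared error at all:)

import torch
import torch.nn.functional as F

output = torch.randn(1, 256, 64, 64)  # dummy feature maps
target = torch.randn(1, 256, 64, 64)

diff = output - target
print(torch.norm(diff, 2))         # sqrt of the sum of squared differences
print(F.mse_loss(output, target))  # mean of the squared differences
print(torch.norm(diff.pow(2), 2))  # sqrt of the sum of fourth powers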

However, none of these are working. I have normalised the training data to [0, 1] and used the ImageNet mean and std values. Can you tell me where I am going wrong?
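
For reference, my preprocessing is roughly equivalent to this (the standard ImageNet statistics):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),  # converts a PIL image to a float tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])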

Thanks for reading

Could you explain the contrast issue a bit?
I assume you are working on generating images and the contrast seems to be off?

Yes, I am trying to generate upsampled images; however, everything in the generated photo is much darker (the features are sharper, though). For example, if the sky was white in the original photo, it comes out dark grey in the generated one.

Any solution would be appreciated.