# Un-normalizing the prediction with a different loss function

Because the target values lie in a completely different range, roughly 0 to 6000, the target should be normalized to train the model.
For example:

```python
t_mean = torch.mean(target)
t_std = torch.std(target)
t_normal = (target - t_mean) / t_std
```

How can I de-normalize the prediction if the loss function depends on the first and second derivatives of the output of the NN w.r.t. the input:

```python
def P_loss(x, y):
    # PDE residual built from the (precomputed) first and second
    # derivatives of the network output y w.r.t. the input x
    eq = 4 * first_derivative[:, 1] - second_derivative[:, 0]
    loss = torch.mean(eq ** 2)
    return loss
```
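For completeness, here is one way (a sketch, not necessarily my exact setup) to obtain these derivatives with `torch.autograd.grad`, sanity-checked with a closed-form function standing in for the network:

```python
import torch

def pde_loss(x, y):
    """PDE-residual loss 4 * dy/dx1 - d2y/dx0^2 (a sketch; assumes
    x has shape (N, 2) with requires_grad=True and y = model(x) has
    shape (N, 1))."""
    # first_derivative[:, j] = dy/dx_j, shape (N, 2)
    first_derivative, = torch.autograd.grad(
        y, x, grad_outputs=torch.ones_like(y), create_graph=True)
    # differentiate dy/dx0 once more; column 0 is d2y/dx0^2
    second_derivative, = torch.autograd.grad(
        first_derivative[:, 0], x,
        grad_outputs=torch.ones_like(first_derivative[:, 0]),
        create_graph=True)
    eq = 4 * first_derivative[:, 1] - second_derivative[:, 0]
    return torch.mean(eq ** 2)

# sanity check with y = x0**2 + x1 in place of the network:
# dy/dx1 = 1 and d2y/dx0^2 = 2, so the residual is 2 and the loss is 4
x = torch.randn(8, 2, requires_grad=True)
y = (x[:, 0] ** 2 + x[:, 1]).unsqueeze(1)
print(pde_loss(x, y).item())  # 4.0
```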

I think

```python
pred = model(input)
pred = pred * t_std + t_mean
```

does not work correctly.

@FA_mn Not really an answer to your question, but out of curiosity, how does this loss function work? I mean, what is its difference from MSE, and why do you use the derivatives? (Apologies if it is a newbie question, but I am quite new to this.)

Hello,
In some situations, this loss function can be used.
I use it for solving partial differential equations (PDEs). I can refer you here

I see, so are you using it to regress the target values?

Also, regarding your question: according to this post, what you have tried seems to be correct. Did you try applying the inverse transform to the targets to see whether you get back the initial values?
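A minimal round-trip check on the targets (a sketch, with hypothetical values in the 0 to 6000 range) could look like this:

```python
import torch

# hypothetical targets in the original 0-6000 range
target = torch.rand(100) * 6000

t_mean = torch.mean(target)
t_std = torch.std(target)
t_normal = (target - t_mean) / t_std

# the inverse transform should recover the original targets
# (up to floating-point error)
recovered = t_normal * t_std + t_mean
print(torch.allclose(recovered, target, atol=1e-3))  # True
```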

Thanks a lot for the link you mentioned.

In PDEs, I do not have the target values explicitly. In regression, if `x_i` is an input and `u_i` is its target, I can use

```
loss = sum((u_i - y_i)^2, i = 1, ..., N)
```

where `y_i` is the output of the network for `x_i`.
However, in PDEs I do not have `u_i` explicitly; I only know other information, such as the values of a first- or second-order partial differential equation. Therefore, I cannot use `loss = sum((u_i - y_i)^2, i = 1, ..., N)`.
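For comparison, the supervised loss described above is just the mean of squared residuals; in PyTorch terms (a sketch, with `u` as hypothetical targets):

```python
import torch

def data_loss(u, y):
    # standard supervised regression loss: mean of squared residuals
    # over the N samples, i.e. mean((u_i - y_i)^2)
    return torch.mean((u - y) ** 2)

u = torch.tensor([1.0, 2.0])
y = torch.tensor([1.5, 2.5])
print(data_loss(u, y).item())  # 0.25
```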

I tried applying the inverse transform in my PDEs, and I think it worked correctly. I still need to examine the suggestion mentioned in the link you shared.
Thanks a lot.
