Learning them would not make sense in my opinion, since your network would simply learn to drive the weights toward zero, which automatically minimizes the partial losses and thus the total loss.
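A minimal sketch of that failure mode, with made-up stand-in losses (the constant `loss1`/`loss2` values are purely illustrative):

```python
import torch
import torch.nn as nn

# Hypothetical learnable loss weights -- purely for illustration.
w1 = nn.Parameter(torch.tensor(1.0))
w2 = nn.Parameter(torch.tensor(1.0))
optimizer = torch.optim.SGD([w1, w2], lr=0.1)

for _ in range(100):
    loss1 = torch.tensor(2.0)  # stand-ins for the partial losses
    loss2 = torch.tensor(3.0)
    total = w1 * loss1 + w2 * loss2
    optimizer.zero_grad()
    total.backward()
    optimizer.step()

# Both weights end up far below zero: the optimizer "solved" the problem
# by shrinking the weights, not by improving the underlying losses.
print(w1.item(), w2.item())
```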
In the DCGAN tutorial the preferred methodology is the second one, i.e. making two separate forward passes through the discriminator, one with real and one with fake data. Any friendly soul willing to comment and explain why this is the best method?
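For reference, a condensed, runnable version of the pattern I mean (tiny stand-in networks instead of the tutorial's conv nets):

```python
import torch
import torch.nn as nn

# Tiny stand-ins so the sketch runs; the tutorial uses DCGAN conv nets.
netD = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())
netG = nn.Linear(16, 64)
criterion = nn.BCELoss()
optimizerD = torch.optim.Adam(netD.parameters(), lr=2e-4)

real_batch = torch.randn(8, 64)
noise = torch.randn(8, 16)

netD.zero_grad()
# Forward pass 1: all-real batch.
out_real = netD(real_batch).view(-1)
criterion(out_real, torch.ones_like(out_real)).backward()
# Forward pass 2: all-fake batch; detach so only D receives gradients.
out_fake = netD(netG(noise).detach()).view(-1)
criterion(out_fake, torch.zeros_like(out_fake)).backward()
# Gradients from both passes accumulate; one optimizer step uses their sum.
optimizerD.step()
```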
How do we then make the weights learnable? I was looking at this discussion. Won't those weights also be minimized? Or is it because they are using nn.Parameter() that it will be ok? @ptrblck
Basically: I understand that adding the weights as learnable parameters is a bad idea, because the model will find that driving the weights to zero is the best optimization. However, what I don't understand is what they are doing in the discussion linked above.
I went through the discussion again and I don't think the weights are learnable; they are instead weighting the individual losses and thus scaling the gradients.
I might be wrong, so feel free to point me to the right code.
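For what it's worth, a short sketch of that reading, with made-up weights, showing that plain float weights just scale each term's gradient and cannot themselves be optimized away:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(4, 1)
x, y = torch.randn(2, 4), torch.randn(2, 1)

w1, w2 = 0.7, 0.3  # fixed, non-learnable weights (made-up values)
loss = w1 * F.mse_loss(model(x), y) + w2 * F.l1_loss(model(x), y)
loss.backward()
# model.weight.grad is the weighted sum of the two terms' gradients;
# since w1 and w2 are plain floats, nothing can drive them to zero.
```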
Hi, I am trying to replicate the results of the pix2pix conditional GAN. The paper states that the objective includes an L1 loss term. Here is the objective function from the paper (link to paper: https://arxiv.org/pdf/1611.07004.pdf):
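(Transcribing the relevant equations from the paper:)

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x, z))\right)\right]$$

$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[\lVert y - G(x, z)\rVert_1\right]$$

$$G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G)$$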
When observing the objective function, the L1 loss seems to affect only the generator, so I used the loss functions below.
```python
import torch.nn as nn

# `device` is defined elsewhere in the script
bce = nn.BCELoss().to(device)
L1 = nn.L1Loss().to(device)
k = 50  # weight on the L1 term

def criterion_G(y_hat, y):
    # generator loss: BCE term plus weighted L1 term
    loss = bce(y_hat, y) + k * L1(y_hat, y)
    return loss

def criterion_D(y_hat, y):
    # discriminator loss: plain BCE
    loss = bce(y_hat, y)
    return loss
```
But the model does not seem to be training properly (maybe this is normal). Can someone help me with this?
How can I implement the above-mentioned loss?
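In case it helps to clarify the question: my reading of the paper is that the BCE term should be applied to the discriminator's output (against real/fake labels), while the L1 term compares the generated image with the ground truth, rather than both acting on the same tensors. A sketch of that split, where netD, the images, and the shapes are all placeholder stand-ins:

```python
import torch
import torch.nn as nn

# Placeholder stand-ins; the real model is a pix2pix U-Net / PatchGAN.
netD = nn.Sequential(nn.Flatten(), nn.Linear(2 * 3 * 8 * 8, 1), nn.Sigmoid())
bce, L1, k = nn.BCELoss(), nn.L1Loss(), 50

input_img = torch.randn(4, 3, 8, 8)                     # conditioning x
fake_img = torch.randn(4, 3, 8, 8, requires_grad=True)  # stand-in for G(x, z)
target_img = torch.randn(4, 3, 8, 8)                    # ground truth y

# Generator objective: adversarial term on D's verdict, L1 term on pixels.
d_out = netD(torch.cat([input_img, fake_img], dim=1)).view(-1)
g_loss = bce(d_out, torch.ones_like(d_out)) + k * L1(fake_img, target_img)
g_loss.backward()
```

(Note that the paper uses λ = 100 for the L1 weight.)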
I don't know which approach you are referring to, as the discussion seems to mention different use cases.
Could you give an example of your use case and compare it to the "worse" approach?
This is an interesting idea and I would recommend going for it.
Feel free to post an update if you gain some insights into this kind of penalty weighting.