Hi, I'm new to PyTorch and am trying to implement a network for image enhancement.
I managed to run the code below with some warnings (about numpy and softmax), but I ran into a few problems:
I designed a new MSE loss for the generated images (based on the high-frequency components of the DWT) and added it to the generator loss like this:
```python
loss_g = criterion_g(imgHR, imgSR.cpu()) + myMSELoss(imgSR_W, imgHR_W)  # + criterion_val(validity.cpu(), valid.cpu())
```
loss_g.item() has the right value, but the new term doesn't seem to affect the gradients: .grad is unchanged whether or not myMSELoss is added, and the network generates the same images.
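For reference, here is a minimal, self-contained sketch of the behaviour I expect (toy tensors and a simple difference filter standing in for my real DWT step; myMSELoss is a stand-in name): an MSE on high-frequency components computed entirely with torch ops should keep a gradient path back to the generated image. If the wavelet coefficients were ever converted to numpy, that path would be cut.

```python
import torch

def myMSELoss(pred_w, target_w):
    # Plain MSE; stays differentiable as long as both inputs were
    # produced by torch ops (no numpy round-trip, no .detach()).
    return torch.mean((pred_w - target_w) ** 2)

# Toy generated / target images.
imgSR = torch.rand(1, 3, 8, 8, requires_grad=True)
imgHR = torch.rand(1, 3, 8, 8)

# Toy "high-frequency" extraction: horizontal differences, torch-only.
imgSR_W = imgSR[..., 1:] - imgSR[..., :-1]
imgHR_W = imgHR[..., 1:] - imgHR[..., :-1]

loss = myMSELoss(imgSR_W, imgHR_W)
loss.backward()
# imgSR.grad is now populated, so this term does reach the generator.
```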
For training the discriminator, I wrote a loss function like the one below:
```python
val_LR, aux_LR = discriminator(imgLRd.cuda())
val_HR, aux_HR = discriminator(imgHRd.cuda())
val_SR, aux_SR = discriminator(imgSRd.cuda())
loss_d_LR = (criterion_d(aux_LR, labels) + criterion_val(val_LR, fake)) / 2
loss_d_HR = (criterion_d(aux_HR, labels) + criterion_val(val_HR, valid)) / 2
loss_d_SR = (criterion_d(aux_SR, labels) + criterion_val(val_SR, fake)) / 2
loss_d = (loss_d_HR + loss_d_LR + loss_d_SR) / 3
loss_d.backward()
```
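For context, the full discriminator step I intend looks roughly like this. It is a sketch: a toy two-headed network stands in for my resnet18-based discriminator, everything runs on CPU, and the class count and shapes are made up. Averaging the three partial losses before a single backward() is deliberate.

```python
import torch
import torch.nn as nn

# Toy two-headed discriminator standing in for the resnet18-based one.
class ToyD(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 16), nn.ReLU())
        self.val = nn.Linear(16, 1)          # validity head (real/fake logit)
        self.aux = nn.Linear(16, n_classes)  # class-label head

    def forward(self, x):
        h = self.body(x)
        return self.val(h), self.aux(h)

discriminator = ToyD()
criterion_d = nn.CrossEntropyLoss()
criterion_val = nn.BCEWithLogitsLoss()
optimizer_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

# Toy batches of LR / HR / SR images plus targets.
imgLRd = torch.rand(2, 3, 8, 8)
imgHRd = torch.rand(2, 3, 8, 8)
imgSRd = torch.rand(2, 3, 8, 8)
labels = torch.tensor([0, 1])
valid = torch.ones(2, 1)
fake = torch.zeros(2, 1)

optimizer_d.zero_grad()
val_LR, aux_LR = discriminator(imgLRd)
val_HR, aux_HR = discriminator(imgHRd)
val_SR, aux_SR = discriminator(imgSRd)
loss_d_LR = (criterion_d(aux_LR, labels) + criterion_val(val_LR, fake)) / 2
loss_d_HR = (criterion_d(aux_HR, labels) + criterion_val(val_HR, valid)) / 2
loss_d_SR = (criterion_d(aux_SR, labels) + criterion_val(val_SR, fake)) / 2
loss_d = (loss_d_HR + loss_d_LR + loss_d_SR) / 3
loss_d.backward()
optimizer_d.step()
```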
I have seen some solutions for multiple losses here, but I cannot figure out what is wrong with this code.
Gradients are updated on every trial, but the loss does not converge and performance on the test set is poor.
(I would like the discriminator to classify well across images of various resolutions.)
The discriminator has two outputs: a class label and a validity score.
I used pre-trained resnet18 weights as shown below.
I don't think it's wrong to append some layers like that, but I'm not very confident.
Thanks in advance.
Here is my code: