I have a dataset A that contains high-resolution and low-resolution flower images, labeled with 6 classes. During training, I randomly select 16 high-resolution and 16 low-resolution images and feed them to the network (32 images in total, batch size = 32). A cross-entropy loss is used to train the classifier.
I want to assign a higher weight to the high-resolution images and a lower weight to the low-resolution images during training. Is this possible in PyTorch, and how can I do it? The dataloader returns something like
high_res_image, class_high_res, low_res_image, class_low_res = data
With the above code, we only use a batch size of 16 instead of 32 because we feed the two kinds of data separately. Any suggestions on how to use the full 32 images and assign different weights in the loss? Thanks.
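For reference, here is a minimal sketch of what I have in mind, assuming `CrossEntropyLoss(reduction='none')` and per-sample weights (the tensor names, the 2.0/1.0 weights, and the random logits are just placeholders, not my actual model):

```python
import torch
import torch.nn as nn

# Hypothetical batch: 32 logits over 6 classes, first 16 high-res, last 16 low-res
batch_size, num_classes = 32, 6
logits = torch.randn(batch_size, num_classes, requires_grad=True)
targets = torch.randint(0, num_classes, (batch_size,))

# Per-sample weights: higher for high-res (first half), lower for low-res
weights = torch.cat([torch.full((16,), 2.0), torch.full((16,), 1.0)])

criterion = nn.CrossEntropyLoss(reduction='none')  # keep per-sample losses
per_sample_loss = criterion(logits, targets)       # shape [32]
loss = (per_sample_loss * weights).mean()          # weighted average
loss.backward()
```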
@ptrblck: It worked. However, the loss returns a matrix (assume I am working on semantic segmentation with 2 classes). The weights assigned to the samples in the batch are 1.0, 2.0, 3.0 and 4.0, and the total loss is taken as the average of the loss after multiplying by the weights. I tried loss_total = torch.mean(loss * weights), but it did not work (weights has size [4] while the loss has size [b, h, w]). So I had to use a for loop. Do you have any suggestion to vectorize the loop?
This is my code:

import numpy as np
import torch
import torch.nn as nn

num_class = 2
b, h, w = 4, 8, 8
input = torch.randn((b, 1, h, w), requires_grad=True)  # not used below
target = torch.empty((b, h, w), dtype=torch.long).random_(num_class)
pred = torch.rand((b, num_class, h, w), dtype=torch.float)

criterion = nn.CrossEntropyLoss(reduction='none')
loss = criterion(pred, target)  # shape [b, h, w]

weights = torch.from_numpy(np.asarray([1.0, 2.0, 3.0, 4.0]))
# loss_total = torch.mean(loss * weights)  # fails: [b, h, w] vs [4] do not broadcast
loss_total = 0
for i in range(b):
    loss_total += loss[i] * weights[i]
loss_total = torch.mean(loss_total / b)
print(loss_total)
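If it helps, I believe the loop can be replaced by broadcasting: reshape weights to [b, 1, 1] so each sample's [h, w] loss map is scaled by its own weight, then take the mean. A sketch under the same toy setup as above:

```python
import torch
import torch.nn as nn

num_class = 2
b, h, w = 4, 8, 8
target = torch.empty((b, h, w), dtype=torch.long).random_(num_class)
pred = torch.rand((b, num_class, h, w), dtype=torch.float)

criterion = nn.CrossEntropyLoss(reduction='none')
loss = criterion(pred, target)              # shape [b, h, w]

weights = torch.tensor([1.0, 2.0, 3.0, 4.0])
# view as [b, 1, 1] so it broadcasts over the [b, h, w] loss
loss_total = torch.mean(loss * weights.view(-1, 1, 1))
print(loss_total)
```

This gives the same value as the loop, since summing the weighted per-sample losses, dividing by b, and averaging over [h, w] is the same as averaging the weighted loss over all b*h*w elements.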