Can I use a weighted-sampling DataLoader for regression tasks?

I have used a weighted-sampling DataLoader for a classification task, where the objective of the model is to determine which class an image belongs to.

I also have another model which, given an image, predicts its age, weight, and body tone. The DataLoader for this model uses random sampling. Compared to the previous model, its performance isn't great, so I was wondering: can I use a weighted-sampling DataLoader for a regression task, and if yes, what changes would I have to make in my code shown below?

Classification task
Weighted DataLoader sampling for my dataset, which contains approximately 5000 images belonging to 8 classes:

```python
import torch
from torch.utils.data import WeightedRandomSampler

def obtain_class_weights(img_dataset):
    # get_class_distribution() is defined elsewhere; it returns {class: count}
    class_count = [i for i in get_class_distribution(img_dataset).values()]
    class_weights = 1.0 / torch.tensor(class_count, dtype=torch.float)
    # one weight per sample, looked up by that sample's class label
    # (assumes the dataset exposes its labels as img_dataset.targets)
    class_weights_all = class_weights[torch.tensor(img_dataset.targets)]
    return class_weights_all

class_weights_all = obtain_class_weights(train_dataset)
train_weighted_sampler = WeightedRandomSampler(
    weights=class_weights_all, num_samples=len(class_weights_all), replacement=True
)
```

Each batch consists of an image and its label, which belongs to one of the eight classes.
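For reference, a weighted sampler like the one above is wired into the DataLoader like this (a toy sketch: the tensors, class count, and batch size are placeholders, not your actual dataset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy stand-in for the image dataset: 10 samples, 8 classes.
targets = torch.randint(0, 8, (10,))
dataset = TensorDataset(torch.randn(10, 3), targets)

# One weight per sample: inverse frequency of that sample's class.
class_count = torch.bincount(targets, minlength=8).clamp(min=1)
sample_weights = (1.0 / class_count.float())[targets]

sampler = WeightedRandomSampler(
    weights=sample_weights,
    num_samples=len(sample_weights),
    replacement=True,
)

# Note: sampler and shuffle=True are mutually exclusive in DataLoader.
loader = DataLoader(dataset, batch_size=4, sampler=sampler)
```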

Data loader used for the regression task:

Here each batch consists of an image and a label of dimension 3×1 (age, weight, body tone).
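For comparison, a minimal sketch of such a regression DataLoader with plain random shuffling (the image shape, dataset size, and batch size are placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder: 10 "images" paired with a 3-dim continuous label
# (age, weight, body tone).
images = torch.randn(10, 3, 64, 64)
labels = torch.rand(10, 3)

loader = DataLoader(TensorDataset(images, labels), batch_size=4, shuffle=True)
imgs, tgts = next(iter(loader))  # imgs: (4, 3, 64, 64), tgts: (4, 3)
```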

Hi @sparshgarg23, that’s an interesting question. In classification, weighted sampling helps normalize overrepresentation of discrete classes; I’m not sure how that strategy works for regression, where the targets are continuous.

Maybe you can try using the inverse of the target density instead of counts?
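Concretely, one simple approximation of that (my sketch, not tested on your data): bin the continuous targets with a histogram and weight each sample by the inverse of its bin’s count, so rare target values get sampled more often. The bin count and the toy ages are arbitrary choices.

```python
import numpy as np

def inverse_density_weights(targets, n_bins=10):
    """Per-sample weights = 1 / (histogram count of the sample's target bin)."""
    targets = np.asarray(targets, dtype=float)
    counts, edges = np.histogram(targets, bins=n_bins)
    # Map each target to its bin index (interior edges only, so the
    # maximum value still falls in the last bin).
    bin_idx = np.clip(np.digitize(targets, edges[1:-1]), 0, n_bins - 1)
    weights = 1.0 / np.maximum(counts[bin_idx], 1)  # guard empty bins
    return weights / weights.sum()  # normalize (the sampler doesn't require it)

# Rare targets (e.g. very old ages) get larger weights than common ones.
ages = [20, 21, 22, 23, 24, 25, 26, 27, 80, 81]
w = inverse_density_weights(ages, n_bins=6)
```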

Thanks! Any tips on how to obtain the target density?
I found this article about transformed regressors in sklearn, but I am not sure how to integrate it with PyTorch. Any tips or suggestions would be welcome.

I am not an expert, but you could try KernelDensity (sklearn.neighbors.KernelDensity) in sklearn. AFAIK you can fit it to your targets, get densities over your target domain, and use their inverse to weight your samples. All this logic can be added to your obtain_class_weights(). Hope this helps!
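A minimal sketch of that idea (assuming scikit-learn is available; the Gaussian kernel, bandwidth of 5.0, and the toy 1-D ages are placeholders — for your 3-dimensional age/weight/tone target you would fit the KDE on the full (N, 3) matrix, or on each column separately):

```python
import numpy as np
import torch
from sklearn.neighbors import KernelDensity
from torch.utils.data import WeightedRandomSampler

# Toy 1-D targets; in practice use your (N, 3) age/weight/tone matrix.
ages = np.array([20, 21, 22, 23, 24, 25, 26, 27, 80, 81],
                dtype=float).reshape(-1, 1)

kde = KernelDensity(kernel="gaussian", bandwidth=5.0).fit(ages)
log_density = kde.score_samples(ages)   # log p(target) for each sample
weights = 1.0 / np.exp(log_density)     # inverse density: upweight rare targets

sampler = WeightedRandomSampler(
    weights=torch.as_tensor(weights, dtype=torch.double),
    num_samples=len(weights),
    replacement=True,
)
```

This sampler can then replace the default random sampling in the regression DataLoader, the same way the classification sampler was used above.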