How to oversample most classes while leaving one class imbalanced?

I modified some code from @ptrblck at https://discuss.pytorch.org/t/how-to-handle-imbalanced-classes/11264, and when I change the weights to [.3, .7], just as an example, it does not work, unless I'm misunderstanding you. Here's what I did:

import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

numDataPoints = 1000
data_dim = 5
bs = 100

# Create dummy data with class imbalance 9 to 1
data = torch.randn(numDataPoints, data_dim)
target = np.hstack((np.zeros(int(numDataPoints * 0.9), dtype=np.int32),
                    np.ones(int(numDataPoints * 0.1), dtype=np.int32)))

print('target train 0/1: {}/{}'.format(len(np.where(target == 0)[0]), len(np.where(target == 1)[0])))

class_sample_count = np.array(
    [len(np.where(target == t)[0]) for t in np.unique(target)])
weight = 1. / class_sample_count
# Override the computed inverse-frequency weights with manual per-class weights
weight[0] = .3
weight[1] = .7
print(weight)

# Give each sample the weight of its class
samples_weight = np.array([weight[t] for t in target])

samples_weight = torch.from_numpy(samples_weight).double()
sampler = WeightedRandomSampler(samples_weight, len(samples_weight))

target = torch.from_numpy(target).long()
train_dataset = torch.utils.data.TensorDataset(data, target)

train_loader = DataLoader(
    train_dataset, batch_size=bs, num_workers=1, sampler=sampler)

for i, (data, target) in enumerate(train_loader):
  print("batch index {}, 0/1: {}/{}".format(i, len(np.where(target.numpy() == 0)[0]), len(np.where(target.numpy() == 1)[0])))

and the output was:

target train 0/1: 900/100
[0.3 0.7]
batch index 0, 0/1: 82/18
batch index 1, 0/1: 72/28
batch index 2, 0/1: 71/29
batch index 3, 0/1: 74/26
batch index 4, 0/1: 82/18
batch index 5, 0/1: 74/26
batch index 6, 0/1: 81/19
batch index 7, 0/1: 83/17
batch index 8, 0/1: 82/18
batch index 9, 0/1: 78/22

The batches come out at roughly 80/20 rather than the 30/70 I expected. Are you sure your method will work if the dataset is imbalanced to start with?
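
In case it helps, here is my guess at what is going on (please correct me if I am wrong): each of the 900 class-0 samples gets weight 0.3 and each of the 100 class-1 samples gets weight 0.7, so the expected class-0 fraction per batch would be 900 * 0.3 / (900 * 0.3 + 100 * 0.7) ≈ 0.79, which matches the output above. If that is right, then to get an actual 30/70 split I would have to divide the desired class proportions by the class counts, so that each class's total weight, not its per-sample weight, comes out to 0.3 vs 0.7. A minimal, self-contained sketch of that idea (my own assumption, not something from the linked thread):

import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

# Assumption: for ~30/70 batches, each class's *total* weight
# (per-sample weight times class count) should be 0.3 vs 0.7.
target = np.hstack((np.zeros(900, dtype=np.int64), np.ones(100, dtype=np.int64)))
class_sample_count = np.array([900, 100])
desired_proportions = np.array([0.3, 0.7])
per_class_weight = desired_proportions / class_sample_count
samples_weight = torch.from_numpy(per_class_weight[target]).double()
sampler = WeightedRandomSampler(samples_weight, len(samples_weight))

With these weights, class 0 contributes 900 * (0.3 / 900) = 0.3 of the total weight and class 1 contributes 100 * (0.7 / 100) = 0.7, so batches should average out near 30/70, if my understanding of the sampler is right.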