Picking a loss function for maximizing pos_x while minimizing neg_x

Hi all, I would like to ask about your experience picking a suitable loss function for unsupervised problems. Here is a description of my current problem.

Assume I divide the samples into positive and negative groups, then get two groups of scores when passing them through my model. The pseudocode looks like:

pos_scores = model(pos_samples)    # (sample_num, 1)
neg_scores = model(neg_samples)    # (sample_num, 1)
pos_x = pos_scores.mean()
neg_x = neg_scores.mean()
loss = LossFunction(pos_x, neg_x)

I want to maximize pos_scores while minimizing neg_scores at the same time; there is no constraint on their values. Note that it is an unsupervised problem, so there is no ground-truth label as a reference, only a calculated score for each sample.

One loss function I have tried is:

loss = neg_x - pos_x

By minimizing this loss, the model tends to minimize neg_x while maximizing pos_x. But the resulting scores are not very good. Are there any other useful loss functions that could handle this problem? Thanks all!
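One possible issue with the plain difference `neg_x - pos_x` is that it is unbounded, so the optimizer can keep pushing the scores apart indefinitely instead of learning a useful decision boundary. Two commonly used alternatives are a margin (hinge) loss and a softplus (logistic) loss, which saturate once the scores are sufficiently separated. Below is a minimal sketch in PyTorch; the random score tensors are hypothetical stand-ins for the model outputs in the post, and the shapes follow the `(sample_num, 1)` convention above:

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for pos_scores / neg_scores from the post,
# with the same (sample_num, 1) shape.
pos_scores = torch.randn(32, 1, requires_grad=True)
neg_scores = torch.randn(32, 1, requires_grad=True)

pos_x = pos_scores.mean()
neg_x = neg_scores.mean()

# 1) Margin (hinge) loss: zero once pos_x exceeds neg_x by `margin`,
#    so the scores stop growing without bound after separation.
margin = 1.0
hinge_loss = F.relu(margin - (pos_x - neg_x))

# 2) Softplus (logistic) loss: a smooth relative of the hinge; its
#    gradient shrinks as pos_x - neg_x grows, which often trains
#    more stably than the raw difference.
softplus_loss = F.softplus(neg_x - pos_x)

# 3) Pairwise (BPR-style) loss over individual samples instead of
#    the two means: broadcasting (32, 1) against (1, 32) compares
#    every positive score with every negative score, giving each
#    pair its own gradient signal.
bpr_loss = -F.logsigmoid(pos_scores - neg_scores.t()).mean()
```

Whether averaging first (options 1 and 2) or comparing per-pair (option 3) works better depends on your data; comparing per-pair usually gives a stronger training signal when individual samples vary a lot, since the mean can hide badly scored samples.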