Optimizing model predictions when multiple prediction combinations yield the correct answer

I have a model that predicts two values, A and B, from its input. From these predictions I compute C_predict using the formula C = A * (1 - exp(-1 / (B + epsilon))), where epsilon is a small constant to prevent division by zero and B >= 0. During training, the ground truth used for loss computation is C_groundtruth. I also have ground-truth values for A and B, but they are not used directly in the loss; they only serve to check whether the predicted A and B are correct. The loss is computed with a loss function between C_predict and C_groundtruth.
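For concreteness, here is a minimal sketch of the setup described above in plain Python. The function names (`predict_c`, `mse_loss`) and the choice of MSE are my own illustration, not part of the original question; any differentiable loss would fit the same shape.

```python
import math

EPS = 1e-8  # small constant to avoid division by zero


def predict_c(a: float, b: float) -> float:
    """Combine the two model outputs into C_predict via the stated formula."""
    return a * (1.0 - math.exp(-1.0 / (b + EPS)))


def mse_loss(c_pred: float, c_true: float) -> float:
    """Example loss between C_predict and C_groundtruth (MSE chosen for illustration)."""
    return (c_pred - c_true) ** 2


# Example: a = 2.0, b = 0.5 gives C = 2 * (1 - exp(-2)) ~= 1.729
c = predict_c(2.0, 0.5)
```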

However, many different combinations of A and B can yield similar values for C. For instance, instead of predicting a high value for B (the correct answer), my model might instead predict a very low value for A. Is there a way to push my model toward the correct values, or is the problem inherently ambiguous (non-identifiable)?
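To make the ambiguity concrete, a small numerical illustration (the specific A and B values below are arbitrary, chosen only so the two pairs land on nearly the same C):

```python
import math

EPS = 1e-8  # small constant to avoid division by zero


def predict_c(a: float, b: float) -> float:
    """C = A * (1 - exp(-1 / (B + eps))), as in the question."""
    return a * (1.0 - math.exp(-1.0 / (b + EPS)))


# Two very different (A, B) pairs that produce nearly identical C values:
c1 = predict_c(0.1, 0.0)       # tiny A, B = 0: exponential term vanishes, C ~= A
c2 = predict_c(10.0, 99.4992)  # large A, large B: C ~= A / B for large B

# Both c1 and c2 come out close to 0.1, so a loss computed on C alone
# cannot distinguish between these two solutions.
```

This is exactly the degeneracy described above: the loss surface has many minima in (A, B) space that all map to the same C.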