R2 score returns -inf

I want to evaluate my regression model, whose predictions and targets both have shape (batch_size, 236), and I pass them into torcheval's r2_score:

from torcheval.metrics.functional import r2_score
score = r2_score(predictions, targets).item()

This returns -inf, which is definitely not what I was expecting. I used the same data in sklearn, calling reg.score(predictions, targets), which I believe calculates the same metric, and sure enough I had no problems there.

Am I doing something wrong? Is the high dimensionality of the output throwing it off?

-inf is the expected result when the target is constant and the predictions are imperfect.
From the sklearn.metrics.r2_score docs:

In the particular case when y_true is constant, the score is not finite: it is either NaN (perfect predictions) or -Inf (imperfect predictions). To prevent such non-finite numbers to pollute higher-level experiments such as a grid search cross-validation, by default these cases are replaced with 1.0 (perfect predictions) or 0.0 (imperfect predictions) respectively. You can set force_finite to False to prevent this fix from happening.

Here is a code snippet showing this result:

import numpy as np
import sklearn.metrics

y_true = np.full((100,), fill_value=1.)  # constant target
y_pred = np.random.randn(100)

print(sklearn.metrics.r2_score(y_true, y_pred))
# 0.0

print(sklearn.metrics.r2_score(y_true, y_pred, force_finite=False))
# -inf
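The -inf follows directly from the definition R² = 1 − SS_res / SS_tot: with a constant target, SS_tot is zero, so any nonzero residual means dividing by zero. A minimal sketch of that arithmetic in NumPy:

```python
import numpy as np

y_true = np.full(100, 1.0)   # constant target -> zero variance
y_pred = np.zeros(100)       # imperfect predictions -> nonzero residuals

ss_res = np.sum((y_true - y_pred) ** 2)         # 100.0
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # 0.0

with np.errstate(divide="ignore"):
    r2 = 1.0 - ss_res / ss_tot

print(r2)  # -inf
```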

I don't see a similar force_finite argument in torcheval.metrics.functional.r2_score, so check whether your target, or any of its 236 output columns, is constant.
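A quick way to check for that in a (batch_size, 236) target is to look for zero-variance columns. A sketch using NumPy with a hypothetical stand-in array (a torch tensor can be converted with targets.numpy()):

```python
import numpy as np

# Hypothetical stand-in for the real (batch_size, 236) target tensor
rng = np.random.default_rng(0)
targets = rng.standard_normal((32, 236))
targets[:, 5] = 1.0  # a single constant column is enough to produce -inf

# Columns whose values never vary have zero total sum of squares
constant_cols = np.flatnonzero(targets.std(axis=0) == 0)
print(constant_cols)  # [5]
```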