Loss with two different ranges but the same accuracy

I have a question and I'm trying to figure out the answer; I would really appreciate your help.
I have a recommendation-system dataset (containing numerical and categorical columns) and I train a neural network on it. The loss function is binary cross-entropy, and the loss converges to about 0.57.

When I add Gaussian noise to the numerical columns, the loss is sometimes a value less than one, like 0.57, and sometimes a large value, like 25.097, but in both cases the accuracy is 74.030.
I am wondering why the loss falls in two different ranges while the accuracy stays exactly the same. I understand that random noise can change the results, but I don't see why it would change the range of the loss.
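To illustrate what I mean, here is a toy sketch (with made-up probabilities, not my actual model's outputs) of how two sets of predictions can have the same accuracy but very different binary cross-entropy, since BCE depends on the predicted probabilities and not just the thresholded labels:

```python
import numpy as np

def bce(y_true, y_prob, eps=1e-12):
    # Binary cross-entropy, clipped to avoid log(0)
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

def accuracy(y_true, y_prob):
    # Threshold probabilities at 0.5 and compare to labels
    return np.mean((y_prob >= 0.5) == y_true)

y_true = np.array([1, 1, 0, 0])

# Mildly confident predictions: one mistake (second example)
p_mild = np.array([0.8, 0.4, 0.3, 0.2])
# Extremely confident predictions: the same single mistake
p_extreme = np.array([0.999, 1e-9, 0.001, 0.001])

print(accuracy(y_true, p_mild), bce(y_true, p_mild))        # 0.75, moderate loss (~0.43)
print(accuracy(y_true, p_extreme), bce(y_true, p_extreme))  # 0.75, huge loss (~5.2)
```

Both prediction sets misclassify the same example, so the accuracy is identical, but the confidently wrong probability in the second set makes the log term blow up.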