Test loss climbing along with accuracy

My train loss decreases and my test accuracy climbs, which all looks normal.
My test loss, however, first decreases and later climbs, so the curve looks roughly like a second-degree polynomial (a U shape).
I've read some explanations online that this could be a sign of overfitting, or of low confidence in the network's predictions.
The thing is, the accuracy keeps climbing, so I'm not sure overfitting is the whole story.
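For what it's worth, here is a toy sketch (with made-up probabilities, not from my actual run) of how accuracy and cross-entropy loss can move in opposite directions: accuracy only cares whether the true class wins, while the loss also punishes how confidently the remaining mistakes are made.

```python
import math

def metrics(probs_true_class, threshold=0.5):
    # accuracy: a sample counts as correct when the true class
    # gets more than half the probability mass (binary case)
    acc = sum(p > threshold for p in probs_true_class) / len(probs_true_class)
    # mean cross-entropy: -log of the probability assigned to the true class
    loss = sum(-math.log(p) for p in probs_true_class) / len(probs_true_class)
    return acc, loss

# hypothetical probabilities a classifier assigns to the TRUE class
# for five test samples, at an early and a late epoch
early = [0.6, 0.6, 0.6, 0.4, 0.4]   # cautious predictions, two wrong
late  = [0.9, 0.9, 0.9, 0.9, 0.01]  # confident, one badly-wrong outlier

acc_e, loss_e = metrics(early)
acc_l, loss_l = metrics(late)
print(f"early: acc={acc_e:.2f} loss={loss_e:.3f}")  # acc=0.60 loss=0.673
print(f"late:  acc={acc_l:.2f} loss={loss_l:.3f}")  # acc=0.80 loss=1.005
```

So between the two "epochs" accuracy goes up while the mean loss also goes up, driven entirely by one confidently wrong sample. Maybe that's what's happening here?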
I understand this can happen in general, but it also happens when I run the basic classification tutorial:
https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html

It uses CIFAR-10 and ResNet-18, so it's supposed to be simple and easy to learn, and I wouldn't expect this behaviour there. Does that make sense?