Validation loss not following training loss on the same samples

I am trying to overfit my model on 2 samples.
I run the training until the training loss is 0, and then I run validation on the same 2 samples.
But the validation loss is higher, as can be seen here:

Why is the validation loss not 0 as well, and why is it moving in the opposite direction?

Is the output used to compute the validation loss the same as the one used for the training loss?

I ask because I wonder whether the model's behavior depends on the value of model.training.
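What I mean is that layers such as Dropout and BatchNorm compute their forward pass differently depending on that flag. A minimal sketch of the effect (the BatchNorm model below is just a hypothetical example, not your model):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical model with a layer whose forward pass depends on
# model.training (BatchNorm here; Dropout behaves similarly).
model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Linear(4, 1))

x = torch.randn(2, 4)  # the same 2 samples in both modes

model.train()
out_train = model(x)   # BatchNorm normalizes with the batch statistics

model.eval()
out_eval = model(x)    # BatchNorm normalizes with its running statistics

print(torch.allclose(out_train, out_eval))  # typically False
```

If your model contains such a layer, the same input can produce different outputs, and therefore different losses, in the two modes.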

Yes, it’s exactly the same, and yes, I do call model.eval() for validation.

That sounds weird. Could you share a minimal, reproducible code snippet?
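Something along these lines would be enough, for example (this is only a hypothetical setup with a BatchNorm layer standing in for your model, so the exact numbers will differ):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical minimal reproduction: overfit 2 samples, then validate on the
# same 2 samples in eval mode. Model and data are placeholders, not the original.
model = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.ReLU(), nn.Linear(8, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

x = torch.randn(2, 8)
y = torch.randn(2, 1)

model.train()
for _ in range(2000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print("final training loss:", loss.item())

# Validation on the exact same samples, but with model.training == False.
model.eval()
with torch.no_grad():
    val_loss = criterion(model(x), y)
print("validation loss:", val_loss.item())
```

If your model has BatchNorm (or Dropout) layers, a repro like this typically shows the same kind of gap: the training loss reaches roughly 0 while the eval-mode loss on the same two samples does not, because the forward pass changes with model.training.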