Train loss and validation loss are almost the same

The training loss and validation loss of my model are almost the same from the second epoch onward. Both losses keep decreasing, but they remain close to each other throughout training.
Does this mean the model is overfitting?

No. Overfitting would be visible as a gap between the training and validation losses, with the training loss being the lower one; typically the validation loss plateaus or rises while the training loss keeps falling. Two losses that decrease together and stay close are a sign the model is still generalizing, not overfitting.
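One simple way to see the difference is to compare the two loss histories epoch by epoch. The helper below is a minimal sketch (the function name and tolerance are my own, not from any framework) that computes the per-epoch gap `val - train`; a gap that stays near zero matches your situation, while a gap that grows over time is the classic overfitting pattern:

```python
def loss_gap(train_losses, val_losses):
    """Per-epoch gap between validation and training loss.

    A gap that stays near zero means the model generalizes about as well
    as it fits the training data; a gap that grows epoch over epoch
    (val loss flat or rising while train loss keeps falling) suggests
    overfitting.
    """
    return [v - t for t, v in zip(train_losses, val_losses)]


# Losses that track each other closely, as in the question: no overfitting signal.
train = [1.00, 0.70, 0.50, 0.40, 0.35]
val   = [1.02, 0.72, 0.52, 0.42, 0.36]

# An overfitting pattern: train loss keeps falling, val loss turns around.
train_of = [1.00, 0.60, 0.40, 0.25, 0.15]
val_of   = [1.00, 0.65, 0.60, 0.70, 0.85]
```

With these made-up curves, `loss_gap(train, val)` stays small at every epoch, whereas `loss_gap(train_of, val_of)` keeps growing, which is exactly the divergence you would watch for (and the usual trigger for early stopping).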