Training loss and validation loss are almost the same

The training loss and validation loss of my model have been almost the same since the second epoch. As training continues, both losses keep decreasing, but they remain close to each other throughout.
Does this mean the model is overfitting?
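For context, overfitting usually shows up as the two curves diverging: training loss keeps falling while validation loss plateaus or rises. Below is a minimal sketch of that check; the `diverges` helper and the loss numbers are hypothetical, not from the model in question.

```python
def diverges(train_losses, val_losses, tol=0.0):
    """Return True if validation loss starts rising while training loss falls."""
    for i in range(1, len(train_losses)):
        train_down = train_losses[i] < train_losses[i - 1]
        val_up = val_losses[i] > val_losses[i - 1] + tol
        if train_down and val_up:
            return True
    return False

# Curves like the question describes: both fall together, staying close.
train = [1.0, 0.7, 0.5, 0.4, 0.35]
val = [1.0, 0.72, 0.52, 0.42, 0.37]
print(diverges(train, val))  # False: no classic overfitting signal

# Classic overfitting: training loss keeps falling, validation loss turns up.
val_over = [1.0, 0.7, 0.6, 0.65, 0.7]
print(diverges(train, val_over))  # True
```

By this criterion, train and validation losses tracking each other closely is the opposite of the usual overfitting signature.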