Validation loss stays in the same range (and so does the training loss)

I am training a deep CNN-based model and my validation loss always stays in the same range (5.81 to 5.84). Since this is a regression problem, I am using Root Mean Square Error (RMSE) as the loss and implementing the U-Net architecture. The problem is:

  1. Increasing or decreasing the learning rate does nothing
  2. Making the architecture deeper does nothing
  3. Increasing the channels/features does nothing
  4. Changing the training and validation splits does nothing

I have tried everything listed above, but none of it decreases the validation loss. The training loss stays in the same range as well, with slightly more fluctuation but almost no change. What should I do?
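For reference, RMSE is usually implemented as the square root of MSE; here is a minimal sketch in PyTorch (the framework is an assumption, since the post does not name one):

```python
import torch
import torch.nn as nn

class RMSELoss(nn.Module):
    """Root Mean Square Error: sqrt(MSE). The small eps keeps the
    gradient finite when the MSE is exactly zero."""
    def __init__(self, eps=1e-8):
        super().__init__()
        self.mse = nn.MSELoss()
        self.eps = eps

    def forward(self, pred, target):
        return torch.sqrt(self.mse(pred, target) + self.eps)
```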

Try to overfit a small dataset, e.g. just 10 samples, by playing around with the hyperparameters, the architecture, etc., as sketched below.
Once your model is able to do so, scale the use case back up by using more data.
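
A minimal sketch of such an overfitting sanity check, assuming PyTorch; `model` and `dataset` are placeholders for your own U-Net and data:

```python
import torch
from torch.utils.data import DataLoader, Subset

# Placeholders: substitute your own U-Net instance and dataset here.
small_ds = Subset(dataset, range(10))      # keep only 10 samples
loader = DataLoader(small_ds, batch_size=10)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = torch.nn.MSELoss()

for epoch in range(500):
    for x, y in loader:
        optimizer.zero_grad()
        out = model(x)
        loss = torch.sqrt(mse(out, y))     # RMSE
        loss.backward()
        optimizer.step()
    if epoch % 50 == 0:
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The loss should approach roughly zero on these 10 samples. If it plateaus instead, the problem is more likely in the model, the loss computation, or the data pipeline than in the hyperparameters.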