Model Convergence Issue in Custom Training Loop

I'm running into a convergence problem with a custom training loop in PyTorch: the loss stagnates after a few epochs. I'm using Adam with a learning rate of 0.001. The issue persists even after adjusting the learning rate and changing the network architecture. Any advice on what might be causing this, or on how to debug it?

If you can post a snippet of your code here, it will be easier to suggest a solution.
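
In the meantime, two common culprits when the loss stagnates in a custom loop are a missing `optimizer.zero_grad()` (gradients then accumulate across batches) and computing the loss on a detached tensor so no gradient flows back to the parameters. For comparison, here is a minimal sketch of a standard training loop; the model, data, and hyperparameters below are placeholders, not your actual setup:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data and model; substitute your own.
X = torch.randn(512, 20)
y = torch.randint(0, 2, (512,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(10):
    model.train()                      # ensure dropout/batchnorm are in training mode
    running_loss = 0.0
    for inputs, targets in train_loader:
        optimizer.zero_grad()          # clear gradients from the previous step
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()                # backpropagate through the graph
        optimizer.step()               # apply the parameter update
        running_loss += loss.item()    # .item() only for logging, after backward()
    print(f"epoch {epoch}: avg loss {running_loss / len(train_loader):.4f}")
```

If your loop already matches this structure, the issue may be elsewhere (data preprocessing, label encoding, loss choice), but without seeing the code that's just a guess.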