Model not training after revisiting in a few weeks

I created a CNN implementation a few months ago and it worked great with different losses (Dice, CE, Focal). The IoU and F1 scores would turn out great in just 3-5 epochs. However, I revisited it a few weeks later and tried to re-train it on the same data, but it's not training. I initially tried to train it with Dice loss (as it gave the best results), but it raised a "no grad" error, something I had never faced before. I set requires_grad to True and the error went away, but now it doesn't really train: the metrics just hop around a bit and stay more or less the same over tens of epochs. I tried the other losses too and they're not training either.

I have not changed the code base in any way. I have, however, done all of this on Colab. Could version changes or something along those lines contribute to this? I'd really appreciate any help. I've been stuck on this for a while now and it's incredibly frustrating.

This sounds a bit dangerous, as I guess you might have set the requires_grad attribute of the loss to True in order to avoid the "no grad" error?
If so, your computation graph would still be detached and you would just be masking the error instead of fixing it.
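To illustrate the failure mode, here is a minimal sketch with a hypothetical one-layer model: if the loss is detached from the graph (e.g. via `.detach()` or a round-trip through NumPy somewhere in the loss computation), `backward()` raises the "no grad" error; setting `requires_grad` on the loss silences it, but no gradients ever reach the model parameters, so the optimizer has nothing to update:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model and data for demonstration
model = nn.Linear(4, 1)
x = torch.randn(8, 4)
target = torch.randn(8, 1)

out = model(x)
# A detach() anywhere in the loss computation breaks the graph:
loss = ((out - target) ** 2).mean().detach()

try:
    loss.backward()
except RuntimeError as e:
    # "element 0 of tensors does not require grad and does not have a grad_fn"
    print("backward fails:", e)

# Setting requires_grad on the detached loss silences the error,
# but the gradient stops at the loss tensor itself:
loss.requires_grad_(True)
loss.backward()
print(model.weight.grad)  # stays None, so the model never updates
```

The right fix is to remove whatever detaches the graph inside the loss function, not to set `requires_grad` on its output.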

Could you post the model definition as well as the loss function, which creates the issue, please?