Fixed value output – not improving / moving, but only sometimes

I have a model which produced very good results.
I saved all the parameters (the vgg16-ft_extractor, the regressor, and the optimizer) right after initialisation.

If I run the model for training again, or change some hyperparameters, in most cases the outcome is similar (as expected).
(I initialize the model with the parameters of the successful run, as described above.)
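For context, the checkpointing pattern described above can be sketched roughly like this (the module definitions and the file name are placeholders, not the actual script):

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the VGG16 feature extractor
# and the distance regressor head mentioned above.
feature_extractor = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Flatten())
regressor = nn.Linear(8 * 30 * 30, 1)
optimizer = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(regressor.parameters()),
    lr=1e-4,
)

# Save all states right after initialisation ...
torch.save(
    {
        "feature_extractor": feature_extractor.state_dict(),
        "regressor": regressor.state_dict(),
        "optimizer": optimizer.state_dict(),
    },
    "init_checkpoint.pt",
)

# ... and restore them before every new training run.
checkpoint = torch.load("init_checkpoint.pt")
feature_extractor.load_state_dict(checkpoint["feature_extractor"])
regressor.load_state_dict(checkpoint["regressor"])
optimizer.load_state_dict(checkpoint["optimizer"])
```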

But sometimes nothing happens at all. The (bad) training/validation results don't move across the epochs.
The estimate (let's say the distance) is always fixed: one value for all predictions in every epoch.
– The strange thing is that the same *.py script and initialized parameters usually work without any problems…

Any idea why the network behaves like this?
Is there something like a "CUDA" cache which I have to reset manually?

Example:

Target Distance   : tensor([[ 3.3200],
        [17.7000],
        [28.5000],
        [43.3200]], device='cuda:0')
Predicted Distance: tensor([[27.3686],
        [27.3686],
        [27.3686],
        [27.3686]], device='cuda:0')

It could be that there is some nondeterminism in your setup and that some initializations perform poorly compared to others. To rule this out, could you try setting a manual seed and consulting the Reproducibility — PyTorch 2.0 documentation?
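A minimal seeding sketch along the lines of those docs might look like this (the helper name `seed_everything` is just a convention, not a PyTorch API):

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    # Seed every RNG a typical PyTorch training loop touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds all CUDA devices
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    # Ask cuDNN for deterministic kernels (may slow training down).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Two runs with the same seed now produce identical initial weights.
seed_everything(0)
a = torch.nn.Linear(4, 1).weight.detach().clone()
seed_everything(0)
b = torch.nn.Linear(4, 1).weight.detach().clone()
print(torch.equal(a, b))  # True
```

If the run with a fixed seed still sometimes collapses to one constant prediction, the cause is likely elsewhere (e.g. an unstable learning rate or dead activations) rather than unlucky initialization.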