Validation loss and convergence

I have a model where, for the same parameters, the validation loss sometimes converges and sometimes does not.
Does that mean anything?

Please elaborate on what exactly you mean by "not converging": do you mean that your training loss goes down while your validation loss goes up, or that the two losses do not always go down to the same value in the same number of batches?

Also, are you training a "usual" network, or a GAN?

There are 2 cases:

  1. the training loss goes down, the training accuracy goes to 1, the validation loss goes down too, and the validation accuracy also goes to 1

  2. the training loss goes down, the training accuracy goes to 1, but the validation loss goes up

Both cases happen with the same parameters.

So, case 1 looks like a normal training process, and case 2 looks like overfitting: the model learns your training dataset but does not learn the general relationships in the input data (it's as if, instead of preparing for the test, you memorized the answers). Which of the two you get on a given run depends on sources of randomness such as weight initialization and data shuffling, which is why identical parameters can produce different outcomes.
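You can tell the two cases apart programmatically from the recorded loss histories. Here is a minimal, framework-agnostic sketch; the function name `diagnose_run`, the `window` size, and the comparison heuristic are illustrative assumptions, not a standard API:

```python
def diagnose_run(train_losses, val_losses, window=3):
    """Classify a run as 'converging' or 'overfitting' from its loss curves.

    Illustrative heuristic: compare the mean loss over the first `window`
    epochs with the mean over the last `window` epochs, for both curves.
    """
    early_train = sum(train_losses[:window]) / window
    late_train = sum(train_losses[-window:]) / window
    early_val = sum(val_losses[:window]) / window
    late_val = sum(val_losses[-window:]) / window

    if late_train >= early_train:
        return "not training"   # training loss is not decreasing at all
    if late_val < early_val:
        return "converging"     # case 1: both losses decrease
    return "overfitting"        # case 2: training loss down, validation loss up
```

For example, `diagnose_run([1.0, 0.5, 0.2, 0.1, 0.05, 0.02], [1.0, 0.8, 0.7, 0.8, 0.9, 1.1])` returns `"overfitting"`, matching case 2 above.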

Try simplifying your model by giving it fewer neurons, which makes it harder to memorize the training data, and see if the problem persists. Also, some plots of the losses and accuracies would be nice.
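Besides shrinking the model, a common guard against case 2 is early stopping: end training as soon as the validation loss stops improving. A minimal sketch of the idea (the class name, `patience`, and `min_delta` are illustrative assumptions; most frameworks ship an equivalent, e.g. Keras's `EarlyStopping` callback):

```python
class EarlyStopper:
    """Stop training when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience      # epochs to tolerate without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch with the current validation loss.

        Returns True when training should stop.
        """
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In your training loop you would check `if stopper.step(val_loss): break` after each epoch, so the run ends near the point where the validation curve turns upward instead of continuing into the overfitting regime.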