Clipping parameter values during training/testing

I’m working on a GAN. There are upper and lower bounds on the pixel values of the images. I have applied clipping and normalization during training. Should I also clip or normalize during testing?
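
For reference, the training-time preprocessing I mean looks roughly like this (the bounds and the scaling are just my setup, not anything standard):

```python
import torch

def preprocess(images, lo=0.0, hi=255.0):
    # Enforce the known pixel bounds, then rescale to [-1, 1]
    # (the range my generator's tanh output uses).
    images = images.clamp(lo, hi)
    return (images - lo) / (hi - lo) * 2.0 - 1.0
```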

So it is hard to comment on a very concrete situation from a somewhat general description, but here are some thoughts:

In general, it is a good idea to keep the difference between training and testing smallish.

In practice, the two most notable exceptions are

  • dropout, which has an interpretation as taking a mean during evaluation, where during training we sample from a distribution,
  • and batch norm, where we don’t like the discrepancy between training and testing; for example, transformers use layer norm, which, among other things, eliminates the train-test discrepancy even if it makes evaluation somewhat more costly (see the sketch after this list).
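
In PyTorch terms, this is what the train/eval switch controls; a minimal sketch, assuming standard nn.Dropout and nn.BatchNorm1d layers:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.ReLU(), nn.Dropout(p=0.5))
x = torch.randn(4, 8)

model.train()   # dropout samples a random mask, batch norm uses batch statistics
y_train = model(x)

model.eval()    # dropout acts as the identity (the "mean"), batch norm uses running statistics
y_eval = model(x)
```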

For other things, like spectral norm, we don’t do all the computation we do during training, but make sure that the same weight is used in evaluation as in training.
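
As a concrete illustration (a sketch using PyTorch’s spectral_norm utility; your setup may differ):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

layer = spectral_norm(nn.Linear(16, 16))
x = torch.randn(2, 16)

layer.train()
layer(x)   # each training-mode forward runs a power-iteration step to refresh the spectral norm estimate

layer.eval()
layer(x)   # evaluation reuses the stored estimate, so the same normalized weight is applied without the extra compute
```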

From your description, it sounds like you are more in the spectral-norm-like case. If you actually clipped/normalized the weights, they would keep those values, so you would not need to do it over and over again; if you worked on the activations (intermediate results of the computation), you would keep doing it at test time.
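
Here is a small sketch of the distinction (the module and the clipping ranges are made up for illustration):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(64, 784)

    def forward(self, z):
        out = self.fc(z)
        # Clipping/normalizing an *activation* is part of the forward
        # computation, so it also runs at test time.
        return out.clamp(-1.0, 1.0)

g = Generator()

# Clipping *weights* (e.g. WGAN-style weight clipping) changes the stored
# parameters in place; after training, the clipped values are simply what
# gets saved, so there is nothing extra to do at test time.
with torch.no_grad():
    for p in g.parameters():
        p.clamp_(-0.01, 0.01)
```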

Best regards

Thomas

Thank you for your suggestions. I will try them…