Training works fine, and the pictures saved during training look good. But I'm having problems with testing the network: the predicted images are very different from the training pictures. Has anyone of you experienced the same problem?
Here is my code for testing the model. Maybe I’m missing something:
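For reference, a minimal sketch of a typical test setup (the `Generator` class, the shapes, and the commented-out checkpoint path are assumptions for illustration, not the actual code):

```python
import torch
import torch.nn as nn

# Placeholder network; stands in for the actual trained model.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

model = Generator()
# In the real script the trained weights would be restored here, e.g.:
# model.load_state_dict(torch.load("checkpoint.pth")["model_state_dict"])
model.eval()  # disable dropout, use batchnorm running stats

with torch.no_grad():                  # no gradient tracking at test time
    x = torch.randn(1, 3, 64, 64)      # must match the training preprocessing
    prediction = model(x)
```

The two classic pitfalls in such a script are forgetting `model.eval()` and applying a different preprocessing than during training.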
Just made a quick test and saved some images during training using eval() mode. The images from eval() look the same as the images saved in train() mode, so the train/eval switch doesn't seem to be the cause.
Thanks for the test.
In that case I would recommend using a fixed input (sample the data once and save it) and then comparing the outputs layer by layer between your training and validation scripts.
Something apparently went wrong, assuming you are using the same preprocessing and the state_dict was loaded successfully.
You could create a suitable input, e.g. via x = torch.randn(your_shape), and save it with torch.save.
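That suggestion in code (the shape and filename are just placeholders):

```python
import torch

# Sample the input once and persist it, so the training and the
# validation script can load the identical tensor.
x = torch.randn(1, 3, 224, 224)   # replace with your_shape
torch.save(x, "fixed_input.pt")

# In each script, load the same fixed input back:
x_loaded = torch.load("fixed_input.pt")
```

Feeding both scripts the identical tensor removes data loading and preprocessing as variables, so any remaining difference must come from the model itself.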
To compare the activations you could use forward hooks as described here.
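A minimal sketch of the forward-hook approach (the model here is a stand-in; the pattern carries over to any nn.Module):

```python
import torch
import torch.nn as nn

activations = {}

def save_activation(name):
    # Returns a hook that stores the module's output under `name`.
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# Placeholder model; register a hook on every submodule.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
for name, module in model.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(save_activation(name))

out = model(torch.randn(1, 10))
# `activations` now maps layer names to their outputs; dump this dict
# in both the training and the validation script and diff the tensors.
```

Running this with the same fixed input in both scripts shows you the first layer at which the activations diverge.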
I save the model as checkpoints during training, and all predictions, even from the first saved checkpoint, look similar to those from the last one. I think maybe there is a problem with saving the model, but I don't think I'm doing anything wrong here:
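For comparison, a minimal sketch of the usual checkpointing pattern (the model, optimizer, and filename are placeholders, not the actual code):

```python
import torch
import torch.nn as nn

# Placeholder model and optimizer; yours will differ.
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters())

# Save everything needed to restore the training state later.
torch.save({
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}, "checkpoint.pth")

# Later, in the test script: rebuild the architecture, then load weights.
restored = nn.Linear(4, 2)
state = torch.load("checkpoint.pth", map_location="cpu")
restored.load_state_dict(state["model_state_dict"])
restored.eval()
```

One thing worth checking: load_state_dict returns `<All keys matched successfully>` when everything lines up; a silent mismatch only occurs if `strict=False` was passed.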