CycleGAN inference images differ from images generated during training

Hi,

I’m using a CycleGAN for image style transfer. The implementation is based on this repo: https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/cyclegan/cyclegan.py

Training works fine, and the pictures saved during training look good. But I’m having problems with testing the network: the predicted images are very different from the pictures saved during training. Has anyone of you experienced the same problem?

Here is my code for testing the model. Maybe I’m missing something:

model_instance = Model(settings)
model_instance.load_state(model_path)
ds = Dataset(path_signal_test_csv, path_target_test_csv)
signal, target = ds[0]  # take one preprocessed test pair from the dataset
predictionSignal, predictionTarget, recPredSignal, recPredTarget = model_instance.predict(signal, target)


def predict(self, imageA, imageB):
    imageA = imageA.to(device=self.device)
    imageB = imageB.to(device=self.device)

    # eval mode: disables train-time behavior in norm/dropout layers
    self.G_AB.eval()
    self.G_BA.eval()

    with torch.no_grad():
        predictionA = self.G_AB(imageA)
        predictionB = self.G_BA(imageB)

        # cycle the translated images back for the reconstruction check
        recPredA = self.G_BA(predictionA)
        recPredB = self.G_AB(predictionB)

    return predictionA, predictionB, recPredA, recPredB

Are you preprocessing the images during inference in the same way as during training?
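
For reference, a deterministic test-time pipeline would look roughly like this (just a sketch; the image size and normalization stats are taken from the linked repo and have to match whatever you actually trained with, minus the random augmentations):

import torchvision.transforms as transforms

# Test-time transforms: same resize/normalization as training, but no
# RandomCrop/RandomHorizontalFlip, so the inputs are deterministic.
test_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])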

Yes, I do.

I really don’t know what is happening.

The inference from G_AB looks kind of good, but its recovered (cycled-back) image looks really wrong.

Are you also saving some pictures in eval() mode during training, so you could compare those to your current output?
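
Something like this inside the training loop would do it (a minimal sketch; G_AB, fixed_batch, and step are placeholders from your own training code):

import torch
from torchvision.utils import save_image

# Every N iterations: generate samples with the generator in eval() mode
G_AB.eval()
with torch.no_grad():
    fake_B = G_AB(fixed_batch)  # fixed_batch: a fixed batch of domain-A images
save_image(fake_B, "eval_sample_%d.png" % step, normalize=True)
G_AB.train()  # switch back before continuing training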

I just made a quick test and saved some images during training via eval() mode. The images saved in eval() mode look the same as the images saved during normal training.

Thanks for the test.
In that case I would recommend using a fixed input (sample the data once and save it) and then comparing the outputs layer by layer, running your model in both the training and the validation script.
Something has apparently gone wrong, given that you are using the same preprocessing and the state_dict was loaded successfully.

Could you explain how one creates a fixed input and how to compare the outputs layer by layer?

You could create a suitable input, e.g. via x = torch.randn(your_shape), and save it with torch.save.
To compare the activations you could use forward hooks as described here.

Thank you, I’ll try that.

I save the model as checkpoints during training, and the predictions from all checkpoints, even the very first one, look similar to those from the last one. Maybe there is a problem with saving the model, but I don’t think I’m doing anything wrong here:

def save_state(self, path_save='Model'):  # no leading slash, so os.path.join works
        t = time.localtime()
        save_dir = os.path.join(self.run_dir, path_save)
        os.makedirs(save_dir, exist_ok=True)  # make sure the checkpoint dir exists
        torch.save(self.G_AB.state_dict(), os.path.join(save_dir, time.strftime("%H-%M-%S_G_AB.pt", t)))
        torch.save(self.G_BA.state_dict(), os.path.join(save_dir, time.strftime("%H-%M-%S_G_BA.pt", t)))
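
One extra sanity check I can run on the loading side (a sketch; checkpoint_path stands for one of the saved .pt files): with the default strict=True, load_state_dict raises a RuntimeError on missing or unexpected keys, so if this runs through, the checkpoint at least matches the architecture.

import torch

state = torch.load(checkpoint_path, map_location="cpu")  # checkpoint_path: placeholder
model_instance.G_AB.load_state_dict(state)  # strict=True by default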

Thanks for your help :slight_smile:.