How to train a network with loss feedback from an external program?

Say I have a NN that generates images. Its inputs are random noise and the category of the image I want to generate. I also have a separate program that reports which category actually gets generated, or how far the generated image is from the desired category. Can I use this information to train the NN in some way?

(It’s not a GAN. It differs in two ways: there are no “real” inputs to compare against the generated ones, and the discriminator (or evaluator) is not a NN and has nothing to do with tensors.)

You could try to use the output of your “program” as a target in an autoencoder-like architecture: feed the label into a side branch coming off the latent tensor, while the generator output is trained with e.g. nn.MSELoss.
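A minimal sketch of that side-branch idea, under some assumptions not in the thread: the image size, latent width, and the hypothetical `program_label` (standing in for whatever category the external program reports) are all made up for illustration; the side branch shares the latent with the image decoder, so its loss can push gradients into the shared encoder.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=16, n_categories=10):
        super().__init__()
        # Shared encoder producing the latent tensor.
        self.encoder = nn.Sequential(
            nn.Linear(noise_dim + n_categories, 64), nn.ReLU()
        )
        self.decoder = nn.Linear(64, 3 * 8 * 8)   # image branch
        self.side = nn.Linear(64, n_categories)   # side branch off the latent

    def forward(self, z, cat_onehot):
        latent = self.encoder(torch.cat([z, cat_onehot], dim=1))
        image = self.decoder(latent).view(-1, 3, 8, 8)
        return image, self.side(latent)

g = Generator()
z = torch.randn(4, 16)
c = torch.zeros(4, 10)
c[:, 3] = 1.0  # ask for category 3

image, side_pred = g(z, c)

# Hypothetical: the category the external program says was generated.
program_label = torch.randint(0, 10, (4,))
side_loss = nn.CrossEntropyLoss()(side_pred, program_label)
side_loss.backward()  # gradients flow into the shared encoder
```

The point of the shared latent is that the side-branch loss, whose target comes from the external program, still produces gradients for the encoder that also feeds the image decoder.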

I assume your program cannot generate valid gradients, which could be fed to the generator?

Thank you for your reply!

Yes, my outside program cannot generate valid gradients; it only spits out numbers. I decided to attach a second NN after the first one to approximate my outside program. If the approximation is good enough, I can train my original NN with the gradients provided by the second NN.
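That surrogate-model approach can be sketched roughly like this. Everything concrete here is an assumption for illustration: `black_box_score` stands in for the external program (here it just returns the mean pixel value, detached so it carries no gradients), and the tiny MLPs, sizes, and learning rates are placeholders. Each step first fits the surrogate to the black-box scores on detached images, then trains the generator through the now-differentiable surrogate.

```python
import torch
import torch.nn as nn

def black_box_score(images):
    # Stand-in for the external evaluator: returns one scalar per image,
    # with no usable gradients (hence torch.no_grad).
    with torch.no_grad():
        return images.mean(dim=(1, 2, 3))

generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 3 * 8 * 8)
)
surrogate = nn.Sequential(
    nn.Flatten(), nn.Linear(3 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 1)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(50):
    z = torch.randn(32, 16)
    images = generator(z).view(32, 3, 8, 8)

    # 1) Fit the surrogate to the black-box scores.
    #    images.detach() keeps the generator out of this graph.
    target = black_box_score(images).unsqueeze(1)
    s_loss = mse(surrogate(images.detach()), target)
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()

    # 2) Train the generator through the surrogate, minimizing the
    #    predicted score; only the generator's optimizer steps here.
    g_loss = surrogate(images).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

One caveat worth noting: the surrogate is only trustworthy near the images the generator currently produces, so the two training loops usually have to be interleaved (as above) rather than training the surrogate once up front.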