Is it possible to train a generative model on features other than text?

Sorry if the title isn’t very descriptive.

Say I have a model such as GPT or T5 that generates some text. I then want to compute something from that text, for example the number of title-case words it contains, and compare that count against a label, say 5.

Now I want to train the generative model on a loss computed between the number of title-case words in the generated text and my label (5).

Is it possible to create a loss function that relies on functions (and possibly data) outside of PyTorch's scope? Is this example of mine a feasible scenario?


The loss function should be differentiable.
Check whether the loss function you come up with (a count function, in this case) is differentiable or not.
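One quick way to run that check is to call `backward()` and see whether gradients actually arrive at the input. A minimal sketch, assuming a toy tensor in place of real model output (the sigmoid relaxation here is just a hypothetical differentiable surrogate for a hard count, not something from this thread):

```python
import torch

x = torch.rand(6, requires_grad=True)

# Hard count: the comparison produces a boolean tensor with no grad_fn,
# so nothing can be backpropagated through it.
hard = (x > 0.5).float().sum()
print(hard.requires_grad)  # False

# A sigmoid relaxation keeps the autograd graph connected.
soft = torch.sigmoid((x - 0.5) * 10).sum()
soft.backward()
print(x.grad is not None)  # True
```

If `requires_grad` is `False` on your loss, or `backward()` leaves the parameters' `.grad` as `None`, the loss cannot train the model.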

But it necessarily has to be implemented using PyTorch functions, right?

Say I have functions like this:

import torch

def count_title_case(text):
    # Counts title-case words via plain string operations; returns a Python int.
    return len([word for word in text.split() if word.istitle()])

def loss(output, target):
    # `output` is the generated text; was count_title_case(text), but `text` is undefined here.
    output_count = count_title_case(output)
    return torch.mean((output_count - target) ** 2)

Would they work considering I’m using another library to extract information from the output?

It doesn't matter whether it uses PyTorch functions or not; the functions you plan to use should operate on tensors and be differentiable. In your example, count_title_case works on a decoded string and returns a plain Python int, so autograd cannot trace through it.
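To make this concrete, here is a minimal sketch of why a string-based count breaks the graph: any decode-to-text path goes through a discrete step (argmax or sampling over logits), and gradient flow stops there. The shapes and the token ID being counted are illustrative assumptions, not from this thread:

```python
import torch

logits = torch.randn(4, 10, requires_grad=True)  # stand-in for model outputs
token_ids = logits.argmax(dim=-1)  # discrete decoding step: gradients stop here
count = float((token_ids == 3).sum())  # analogous to counting title-case words: a plain number
loss = (torch.tensor(count) - 5.0) ** 2
print(loss.requires_grad)  # False: the loss has no path back to logits
```

Because the loss is disconnected from `logits`, calling `loss.backward()` would never populate `logits.grad`, which is why a differentiable (or relaxed) formulation is needed.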