How to avoid re-decoding for multiple inputs that have shared prefixes

My question is about the Hugging Face Transformers library, but it may have an answer at the PyTorch level.
I have a Hugging Face pipeline for text generation (intentionally omitting the details):

llm = pipeline('text-generation', model='gpt2-xl')

I would like to use the model above to run generation over a long list of input texts (for each entry individually). The data has a specific property: every entry in the list shares an identical prefix string. To be more concrete, assume the simplified example below:

INSERT A LONG PREFIX HERE first piece of text
INSERT A LONG PREFIX HERE second piece of text
INSERT A LONG PREFIX HERE third piece of text
INSERT A LONG PREFIX HERE fourth piece of text
...

The naive way to run generation for each entry is a for loop that feeds in each piece separately. But one could also decode the prefix once, then reuse the model's cached state to decode only the distinct part of each entry and get its output. Is there a straightforward way to do this?
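For reference, here is a minimal sketch of what I mean, dropping down from the pipeline to the raw model API. This is my own assumption about how it could work, not a confirmed recipe: run the shared prefix through the model once with use_cache=True, then pass the resulting past_key_values when encoding each distinct suffix. I use "gpt2" as a small stand-in for gpt2-xl, and hypothetical prefix/suffix strings:

```python
import copy

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Split so the suffixes carry the leading space; this keeps the BPE
# tokenization of prefix + suffix identical to tokenizing them jointly.
prefix = "INSERT A LONG PREFIX HERE"
suffixes = [" first piece of text", " second piece of text"]

# 1) Encode the shared prefix once and keep its key/value cache.
prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
with torch.no_grad():
    prefix_past = model(prefix_ids, use_cache=True).past_key_values

# 2) For each entry, feed only the distinct suffix, reusing the cache.
for suffix in suffixes:
    suffix_ids = tokenizer(suffix, return_tensors="pt").input_ids
    with torch.no_grad():
        # Deep-copy the cache so one suffix's tokens never leak into the
        # cache reused for the next suffix (newer Cache objects are
        # mutated in place when the forward pass extends them).
        out = model(suffix_ids, past_key_values=copy.deepcopy(prefix_past))
    # out.logits covers only the suffix positions, but each position is
    # conditioned on prefix + suffix, as if decoded from scratch.

# Sanity check on the last suffix: the cached-prefix logits should match
# a full forward pass over the concatenated string.
full_ids = tokenizer(prefix + suffixes[-1], return_tensors="pt").input_ids
with torch.no_grad():
    full_logits = model(full_ids).logits
match = torch.allclose(out.logits[0, -1], full_logits[0, -1], atol=1e-3)
```

This only covers the forward pass, though. For actual generation I believe recent transformers versions accept a past_key_values argument to model.generate as well, which would let the sampling loop itself start from the cached prefix, but I am not sure whether the pipeline wrapper exposes any of this.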