I am not going to claim I know exactly what I am doing here, but I think you can use the GitHub repository as a guide to see how to use the GPT2 class directly.
Here is a small example that uses GPT2 directly rather than through the pipeline, so you can feed tensors straight to the model. I am using the GPT-2 variant that outputs raw hidden states without a specific head on top. If you look through the source code, you will find other variants, for example with a language-modeling head, a multiple-choice head, or a sequence-classification head.
import torch
from transformers import GPT2Model, GPT2Config

# Randomly initialized GPT-2 with the default configuration
config = GPT2Config()
model = GPT2Model(config)
print(model.config)

# inputs_embeds expects (batch_size, sequence_length, hidden_size);
# 768 is the hidden size of the default config
sentence = torch.rand(2, 20, 768)
output = model(inputs_embeds=sentence)
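If you want pretrained weights, or one of the task-specific heads mentioned above, a minimal sketch could look like this (here using GPT2LMHeadModel, the language-modeling variant, loaded via from_pretrained):

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load pretrained weights instead of a randomly initialized model
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)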
Hope this helps a little bit more!