Text classification using ensemble of transformers

I am trying to do text classification using an ensemble of transformers. My dataset contains about 64,000 tweets. For each tweet I got predictions as follows: each transformer model outputs probabilities for whether the tweet is sarcastic or not, and as you can see below there are 10 probabilities per tweet.

Tweet 1    : tensor([0.4219, 0.5781, 0.5237, 0.4763, 0.4977, 0.5023, 0.4618, 0.5382, 0.5324, 0.4676])
Tweet 2    : tensor([0.4295, 0.5705, 0.4761, 0.5239, 0.4859, 0.5141, 0.4979, 0.5021, 0.5025, 0.4975])
Tweet 3    : tensor([0.4000, 0.6000, 0.4832, 0.5168, 0.4932, 0.5068, 0.5023, 0.4977, 0.4939, 0.5061])
.
.
.
Tweet 64000: tensor([0.4213, 0.5787, 0.4904, 0.5096, 0.4870, 0.5130, 0.5084, 0.4916, 0.4900, 0.5100])

I need to feed these probabilities into another neural network, or apply logistic regression, so that I can get a final prediction for each row. I am not sure how I can achieve this or whether it is actually possible. Can you please look into it and provide some insights?

Any help will be much appreciated.

Just treat the 10 probabilities for each tweet as a feature vector, use those vectors as your training set, and train a meta-model against the gold labels. This is commonly called stacking.
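A minimal sketch of the stacking idea with logistic regression, assuming your 64,000 × 10 probability matrix and gold labels are available as NumPy arrays (the random data here is just a stand-in for your real ensemble outputs and sarcasm labels):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for your real data: 64000 tweets x 10 probabilities.
X = rng.random((64000, 10))
# Stand-in gold labels (0 = not sarcastic, 1 = sarcastic).
y = rng.integers(0, 2, size=64000)

# Hold out part of the data to evaluate the meta-model.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The meta-model learns to combine the 10 base-model probabilities.
meta = LogisticRegression(max_iter=1000)
meta.fit(X_train, y_train)

final_preds = meta.predict(X_val)  # one 0/1 prediction per tweet
```

Ideally the probabilities used to train the meta-model should come from tweets the base transformers did not see during their own training, otherwise the meta-model learns from overconfident predictions.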

Thank you so much for your reply. Can you please elaborate or give pseudocode for how I can deal with 64,000 rows with 10 probabilities / 10 columns? Let's say I want to feed these probabilities to a neural network; how do I write a class for my expected output? Please note that I am trying to ensemble 5 transformers, which is why I got 10 probabilities (two class probabilities per model).
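A minimal PyTorch sketch of such a class, under the assumption that the input is a 10-dimensional probability vector per tweet and the output is a 2-class logit vector (all names here, like `MetaClassifier`, are hypothetical; the random tensors stand in for your real data):

```python
import torch
import torch.nn as nn

class MetaClassifier(nn.Module):
    """Small meta-network: 10 ensemble probabilities -> 2-class logits."""
    def __init__(self, n_features=10, hidden=16, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = MetaClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for your real data: 64000 rows x 10 probability columns.
X = torch.rand(64000, 10)
y = torch.randint(0, 2, (64000,))

for epoch in range(2):  # a couple of full-batch steps as a smoke test
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

final_preds = model(X).argmax(dim=1)  # one 0/1 prediction per tweet
```

In practice you would iterate over mini-batches with a `DataLoader` and split off a validation set, but the class itself stays this simple because the input is just a 10-column feature matrix.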