How to get pooler_output from a fine-tuned BERT?

Hi,

I have fine-tuned BERT on my text for multiclass classification with 11 classes and saved a checkpoint after each of the five epochs.
I tokenized and encoded the data with the BERT tokenizer and instantiated the pretrained model as follows:

```
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=11,
                                                      output_attentions=False,
                                                      output_hidden_states=False)
```
Then, to load my fine-tuned model, I used:

```
model_ft = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                         num_labels=11,
                                                         output_attentions=False,
                                                         output_hidden_states=False)
model_ft.load_state_dict(torch.load('finetuned_BERT_epoch_5.model',
                                    map_location=torch.device('cpu')))
```

And for prediction I ran:

```
model_ft.eval()
with torch.no_grad():
    output = model_ft(input_ids_val, attention_masks_val)
```

However, `output[0].size()` is `(validation_size, 11)` and `output[1]` is empty.
I also tried `output_hidden_states=True`, but then I get a tuple of (the `(validation_size, 11)` logits tensor, a tuple of hidden-state tensors), and still no `pooler_output`.
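Here is a minimal, self-contained reproduction of the `output_hidden_states=True` behaviour I am describing. I use a tiny, randomly initialized config so the snippet runs on its own; my real model is the fine-tuned `bert-base-uncased` checkpoint, and the input tensors are placeholders:

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny random model just to illustrate the output structure;
# my real model is the fine-tuned bert-base-uncased checkpoint.
config = BertConfig(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=11,
    output_hidden_states=True,
)
model_ft = BertForSequenceClassification(config)
model_ft.eval()

# Placeholder batch standing in for my validation data
input_ids_val = torch.randint(0, config.vocab_size, (4, 16))
attention_masks_val = torch.ones(4, 16, dtype=torch.long)

with torch.no_grad():
    output = model_ft(input_ids_val,
                      attention_mask=attention_masks_val,
                      return_dict=False)

logits, hidden_states = output
# logits: (batch, 11); hidden_states: tuple of (num_layers + 1) tensors,
# each (batch, seq_len, hidden_size) -- but no pooler_output anywhere.
```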

So I have two questions:

  1. I think the output is supposed to be a tuple (last_hidden_state, pooler_output). Given that, did I miss a step, since I am not receiving the pooler_output?

  2. How can I pull out the pooler output and use it as a representation of the entire sequence for other tasks?
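To make question 2 concrete, this is roughly what I am hoping works: calling the underlying `.bert` submodule directly and reading its `pooler_output`. Again I use a tiny random config as a stand-in for my fine-tuned model, and I am not sure this is the intended approach:

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Random stand-in for my fine-tuned model, small enough to run anywhere
config = BertConfig(hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64,
                    num_labels=11)
model_ft = BertForSequenceClassification(config)
model_ft.eval()

# Placeholder validation batch
input_ids_val = torch.randint(0, config.vocab_size, (4, 16))
attention_masks_val = torch.ones(4, 16, dtype=torch.long)

with torch.no_grad():
    # Call the underlying encoder, bypassing the classification head
    encoder_out = model_ft.bert(input_ids_val,
                                attention_mask=attention_masks_val)

sequence_output = encoder_out.last_hidden_state  # (batch, seq_len, hidden)
pooled = encoder_out.pooler_output               # (batch, hidden)
```

Is accessing `model_ft.bert` like this the right way to get the pooled `[CLS]` representation?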

I appreciate any help!!