How to change the output size of the model when the batch size is specified?

Hi, I’m trying to calculate the distance between embedding vectors extracted from a model. I’m working on an image retrieval problem.
When I work with the Keras ImageDataGenerator, even when the batch size is specified, the output shape of the model is, for example, (number of query images or reference images, num_classes).

But when I work with PyTorch, using a DataLoader with a batch size of, say, 32, the output size of the model becomes (32, num_classes). The last layer is a linear layer.

I need a shape of (number of query images or reference images, num_classes) to do the same thing as I did with Keras, because I need each embedding vector to compute the distances.
Any advice please? Thanks.

Have you tried something like this?

outputs = []
for input_batch in dataloader:
    output_batch = model(input_batch)  # torch Tensor of shape (32, num_classes)
    outputs.append(output_batch)
outputs = torch.cat(outputs, dim=0)  # concatenate the batches along dim 0
print(outputs.shape)  # torch.Size([number of query images, num_classes])

Isn’t the output of the model a tensor? The error says that there is no attribute ‘append’. How can I solve this?

I simply created a list and appended each output batch to my list. Can you share your code here?