Hi, I'm trying to calculate distances between embedding vectors extracted from a model. I'm working on an image retrieval problem.
When I work with the Keras ImageDataGenerator, even when a batch size is specified, the model's output shape is, for example, (number of query or reference images, num_classes).
But when I work with PyTorch and use a DataLoader with a batch size of, say, 32, the model's output shape is (32, num_classes). The last layer is a linear layer.
I need a shape of (number of query or reference images, num_classes), the same as I got with Keras, because I need every embedding vector to compute the distances.
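Here is a minimal, runnable sketch of what I'm currently trying in PyTorch (the tiny model, image sizes, and image counts are placeholders for my actual setup): I run the model batch by batch and concatenate the (32, num_classes) outputs with `torch.cat`, which seems to give the full (number of images, num_classes) tensor. Is this the right approach?

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model ending in a linear layer, standing in for my real network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
model.eval()

# 100 fake query images (3x8x8), loaded with batch size 32 as in my question.
images = torch.randn(100, 3, 8, 8)
loader = DataLoader(TensorDataset(images), batch_size=32)

# Collect each batch's (batch_size, num_classes) output, then concatenate
# along dim 0 to get (number of query images, num_classes).
outputs = []
with torch.no_grad():
    for (batch,) in loader:
        outputs.append(model(batch))
embeddings = torch.cat(outputs, dim=0)
print(embeddings.shape)  # torch.Size([100, 10])
```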
Any advice, please? Thanks!