Turning images into vectors using ResNet


I would like to take my images and turn them into vectors of 1000 features each. I was thinking of using ResNet-50 to do this. Can I run the model forward without a loss criterion? Is that the proper way to do what I intend?

Is there a good way to append those vectors to a series or numpy array?

I don’t know what exact task you want to extract a feature vector for, but in theory you can certainly use a pre-trained ResNet to extract a feature vector of any size from an image. Keep in mind that such pre-trained ResNets were trained specifically for image classification, so the features they extract are tuned to that task. However, you can easily work around this with transfer learning: re-train the last few fully connected layers on your own task.

Thank you. I would like to join these features with tabular data to predict a binary outcome, similar to what Uber's Ludwig model does. Would I just run my model without calculating a loss?

For pure feature extraction you don't need a loss at all: just run the forward pass in eval mode with gradients disabled. A loss criterion only comes into play if you fine-tune the network on your downstream task. Whether fine-tuning helps depends on your problem, so run experiments both ways and compare the results.
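To illustrate the fusion step, here is a hypothetical NumPy sketch of joining per-image feature vectors with tabular features for a downstream binary classifier (the shapes and random data are made up for illustration):

```python
import numpy as np

n_samples = 4
image_feats = np.random.rand(n_samples, 1000)  # one 1000-d ResNet vector per sample
tabular_feats = np.random.rand(n_samples, 12)  # e.g. 12 tabular columns per sample

# Concatenate along the feature axis; each row is one sample.
X = np.concatenate([image_feats, tabular_feats], axis=1)  # shape (4, 1012)
```

`X` can then be fed to any binary classifier, such as logistic regression or a small fully connected network, trained with its usual loss.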

Fair enough. Thank you.