Classification on large images

Hello all!
I'm looking for image classification code whose network takes 2000x1500-pixel images as input, because all the codes I have found only accept 224x224 images, and I need to capture very small details across the whole image.
Thank you
Best regards

A lot of torchvision.models (and I think all classification models) accept variable spatial input shapes.
Assuming your device has enough memory to train the desired model using the increased spatial input shape, you should be able to directly pass your images to the model and train it.

I don’t see any; I'm talking about the size of the image at the input of the neural network. Resizing the images beforehand is commonly done. If you have code for this that also takes very imbalanced classes into account, I'd be interested.

My GPU has 24 GB of RAM, so I suppose that's sufficient.
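As a rough sanity check (order-of-magnitude only, ignoring the exact architecture), activation memory grows with the number of input pixels, so a 2000x1500 image needs roughly 60x the activation memory of a 224x224 one, which usually forces a much smaller batch size:

```python
small = 224 * 224       # pixels in the usual input size
large = 2000 * 1500     # pixels in the proposed input size
ratio = large / small

print(round(ratio, 1))  # ~59.8x more pixels, so expect far larger activations
```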

I’m not sure how to understand this.
As described, you can just pass larger images to these models:

import torch
from torchvision import models

model = models.resnet18()

# standard 224x224 input
x = torch.randn(1, 3, 224, 224)
out = model(x)

# a larger input works as well, since ResNet pools adaptively before the classifier
x = torch.randn(1, 3, 2000, 1500)
out = model(x)

and train them with this data.

For example, with this tutorial: what do I have to change to fit the neural network with 2000x1500-pixel images?

Do I only have to delete these lines?

    transforms.Resize((224, 224)),
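If that resize is dropped, the rest of the pipeline can stay as is; a sketch of what the transforms might then look like (the exact set of transforms depends on the tutorial, and the Normalize values are the standard ImageNet statistics it uses):

```python
from torchvision import transforms

# Same pipeline as the tutorial, minus the Resize, so the
# full 2000x1500 resolution reaches the network.
data_transforms = transforms.Compose([
    # transforms.Resize((224, 224)),  # removed to keep fine details
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```

Note that all images in a batch must still share the same spatial size, which is the case here since every image is 2000x1500.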

Another question: in this code, what is the purpose of the line
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ?

in this code too : GitHub - zhmiao/OpenLongTailRecognition-OLTR: Pytorch implementation for "Large-Scale Long-Tailed Recognition in an Open World" (CVPR 2019 ORAL)

This could be the case, so did you try it?

This code normalizes the data by subtracting the mean and dividing by the std as described in the docs.
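Concretely, each channel c is transformed as output[c] = (input[c] - mean[c]) / std[c]. A minimal plain-Python illustration (the pixel value is made up for the example; in practice the values are already scaled to [0, 1] by ToTensor()):

```python
mean = [0.485, 0.456, 0.406]  # per-channel ImageNet means (R, G, B)
std = [0.229, 0.224, 0.225]   # per-channel ImageNet standard deviations

pixel = [0.5, 0.5, 0.5]       # one made-up RGB pixel in [0, 1]

# subtract the mean and divide by the std, channel by channel
normalized = [(p - m) / s for p, m, s in zip(pixel, mean, std)]
# each channel ends up roughly centered around 0
```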

No, I have not tried it.
But there will be no handling of imbalanced classes if I do that, won't there?
Could you look at the code and tell me, please?
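Removing the resize by itself does nothing about class imbalance. One common approach (a generic sketch, not taken from the linked repository) is to oversample rare classes with torch.utils.data.WeightedRandomSampler, giving each sample a weight inversely proportional to its class frequency. The weight computation itself is plain Python:

```python
from collections import Counter

# Hypothetical labels for a small imbalanced dataset (class 0 is rare)
labels = [0, 1, 1, 1, 1, 2, 2, 2]

counts = Counter(labels)                     # class 1 appears 4x, class 0 only once
weights = [1.0 / counts[y] for y in labels]  # rarer classes get larger weights

# These weights would then be passed to PyTorch's sampler, e.g.:
# sampler = torch.utils.data.WeightedRandomSampler(weights, num_samples=len(weights))
# loader = DataLoader(dataset, batch_size=..., sampler=sampler)
```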