Detect tiny objects in a large image

I have images of size ~7000x~5000 pixels, and the objects I want to find are about 64x64 pixels. What would be the best approach for applying an object detection model to these images? The model does not support this input size, and I think resizing the image down would shrink the objects too much to detect. Can I feed ~7000x~5000 images directly into the object detection model, or do I need to divide the image into patches?

Here is an example image: https://drive.google.com/file/d/1XVco-j9bEO5MoSh-T1p3wYPiRKm7R1o0/view?usp=sharing

AFAIK, if you use Mask R-CNN or Faster R-CNN to detect small objects, large images will automatically be resized internally. I saw min_size (=800) and max_size (=1333) parameters down in the base classes that decide when to scale the image. Unfortunately, I did not find a way to change these min_size and max_size parameters without duplicating half of the PyTorch code.
Therefore, I think the best approach would be to split your image into slightly overlapping patches.
You might be able to speed this up by skipping empty patches with very little pixel contrast (patches whose std() is very low).
Since I am very new here, I think some pros should confirm my advice…
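The tiling-and-skip idea above could look roughly like this (a minimal sketch; the patch size, overlap, and std threshold are assumptions you would tune for your data):

```python
import numpy as np

def iter_patches(image, patch_size=512, overlap=64, min_std=5.0):
    """Yield (x, y, patch) tuples covering the image with overlapping tiles.

    Patches whose pixel standard deviation falls below `min_std` are
    skipped as nearly uniform ("empty") regions.
    """
    h, w = image.shape[:2]
    step = patch_size - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = image[y:y + patch_size, x:x + patch_size]
            if patch.std() < min_std:
                continue  # skip low-contrast patch, probably background
            yield x, y, patch
```

After running the detector on each patch, you would shift every predicted box by that patch's (x, y) offset to get coordinates in the full image, and merge duplicate detections in the overlap regions (e.g. with NMS).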


Looking at your image again, I think you should not use any detection model.
Your patterns are so tiny, they are more “spots” than patterns.

Instead, I propose creating a small set of spot-detection kernels and running them over the image via convolution.
A typical spot detection kernel would be a matrix like this:
 0,  0, -1,  0,  0
 0, -1,  4, -1,  0
-1,  4,  9,  4, -1
 0, -1,  4, -1,  0
 0,  0, -1,  0,  0
This example kernel should spike when it finds a spot with 2-3 pixels in diameter.
You would need kernels for different spot sizes…
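Running that kernel over the image could be sketched like this (the function name is mine; I use scipy.ndimage.convolve for the convolution, and the kernel values are the ones from the matrix above):

```python
import numpy as np
from scipy.ndimage import convolve

# Spot-detection kernel from the answer above: a strong positive centre
# surrounded by a negative ring, so it responds to small bright blobs.
SPOT_KERNEL = np.array([
    [ 0,  0, -1,  0,  0],
    [ 0, -1,  4, -1,  0],
    [-1,  4,  9,  4, -1],
    [ 0, -1,  4, -1,  0],
    [ 0,  0, -1,  0,  0],
], dtype=np.float32)

def spot_response(image, kernel=SPOT_KERNEL):
    """Convolve the image with the spot kernel; strong positive peaks in
    the result indicate spot-like structures at roughly the kernel's scale."""
    return convolve(image.astype(np.float32), kernel, mode="reflect")
```

You would then threshold the response map (or take local maxima) to get candidate spot locations, and repeat with differently sized kernels for other spot diameters.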

A Laplace filter from OpenCV could do the same trick…
