Hello, I have a binary-classification dataset, but each example within the dataset consists of multiple images. Another way of explaining it: each example is one image sliced up into multiple images (to get rid of white space). How do I go about putting this into one tensor? Would I just combine them all using NumPy? Or are you supposed to use an extra dimension, so that all 60 images combined correspond to one label?
To explain this better: you would have image 1 with label 0. This image is split into 60 slices to get rid of white space. How do I make all 60 slices share the same label, given that not all 60 slices might contain what I'm looking for?
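For what it's worth, here is a minimal sketch of the extra-dimension idea, assuming hypothetical sizes (60 grayscale slices of 32×32 each; the array names are made up for illustration). The slices are stacked along a new leading axis so the whole group forms one array paired with a single label:

```python
import numpy as np

# Hypothetical data: 60 slices, each a 32x32 grayscale image.
slices = [np.random.rand(32, 32).astype(np.float32) for _ in range(60)]

# Stack along a new leading "slice" dimension -> one array of shape (60, 32, 32).
example = np.stack(slices, axis=0)

# A single label (0 or 1) for the entire group of 60 slices.
label = 0

print(example.shape)  # (60, 32, 32)
```

The resulting array can then be converted to a tensor in whatever framework you use; the key point is that the slice dimension keeps the group together as one example, so the label applies to the group rather than to any individual slice.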
I don't completely understand the use case, or what "white space" refers to in your example.
How did you split the image into 60 slices, and what does each slice represent?