I'm a new CNN learner. I have some videos, and each video shows one car. From each video I extract frame images of the car and also images of the car's voice spectrum. Let's say I have 5 types of car. I have two main dataset folders: one contains the car pictures, the other contains the voice spectrum images. Each main folder has five subfolders, one per car type.
My CNN will take two input images (one is the picture of the car, the other is the car's voice spectrum). The images will be passed in parallel through VGG16 feature-extraction layers, then flattened, concatenated, and classified. PS: I guess that in order to use the VGG16 pretrained weights I have to keep 3 input channels, so I have two parallel VGG16 feature extractors, and their outputs are combined at the classification part.
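Roughly, this is the model I have in mind (just a sketch, assuming torchvision's pretrained VGG16 and 5 classes; the classifier sizes are my own guess):

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamVGG(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        # two separate pretrained VGG16 feature extractors
        self.image_features = models.vgg16(pretrained=True).features
        self.audio_features = models.vgg16(pretrained=True).features
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # flattened features from both branches are concatenated and classified
        self.classifier = nn.Sequential(
            nn.Linear(2 * 512 * 7 * 7, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(4096, num_classes),
        )

    def forward(self, img, spec):
        f1 = torch.flatten(self.pool(self.image_features(img)), 1)
        f2 = torch.flatten(self.pool(self.audio_features(spec)), 1)
        return self.classifier(torch.cat([f1, f2], dim=1))
```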
My question is: how should I handle the Dataset and DataLoader? Should I have two DataLoaders, one for each input image? How should I set up the Dataset so that both images get passed to my CNN model?
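To make the question concrete, I imagine something like a custom Dataset that returns both images for a car plus the label (only a sketch; the folder paths and transform are placeholders, and it assumes the photos and spectrum files line up when sorted):

```python
import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class CarPairDataset(Dataset):
    """Pairs a car photo with its voice-spectrum image, per class subfolder."""
    def __init__(self, photo_root, spectrum_root, transform=None):
        self.samples = []  # (photo_path, spectrum_path, class_index)
        self.classes = sorted(os.listdir(photo_root))
        for idx, cls in enumerate(self.classes):
            photos = sorted(os.listdir(os.path.join(photo_root, cls)))
            specs = sorted(os.listdir(os.path.join(spectrum_root, cls)))
            for p, s in zip(photos, specs):  # assumes matching order/names
                self.samples.append((os.path.join(photo_root, cls, p),
                                     os.path.join(spectrum_root, cls, s), idx))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        photo_path, spec_path, label = self.samples[i]
        photo = Image.open(photo_path).convert("RGB")
        spec = Image.open(spec_path).convert("RGB")
        if self.transform:
            photo, spec = self.transform(photo), self.transform(spec)
        return photo, spec, label

# one DataLoader is enough; each batch yields (photos, spectrums, labels)
# loader = DataLoader(CarPairDataset("cars/", "spectrums/"), batch_size=8, shuffle=True)
```

Is that the right direction, or is there a more standard way?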
Have you seen a similar basic classification project on the forum or somewhere else? I searched many websites before posting this question, but I couldn't find a clear explanation.
RGB stands for Red, Green, and Blue. Each color has its own channel, and the channels are independent of one another. You could say an RGB image is 3 single-channel images stacked into one; its size is (3, H, W).
The point I'm getting at is that passing in n images as channels is a perfectly valid way to feed data into a model.
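For example, two 3-channel images can simply be stacked into one 6-channel tensor (a minimal sketch with random data):

```python
import torch

car_img = torch.randn(3, 224, 224)       # RGB photo of the car
spectrum_img = torch.randn(3, 224, 224)  # RGB voice-spectrum image

stacked = torch.cat([car_img, spectrum_img], dim=0)
print(stacked.shape)  # torch.Size([6, 224, 224])
```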
Thank you so much for your quick answer! I understand your point. Sorry, I forgot to mention that the two images will be passed through the pretrained VGG16 network separately. As far as I know, to use the pretrained weights I shouldn't change the VGG feature-extraction layers, and the first layer takes 3 channels, so I think I have to feed the images separately and build a model that takes two inputs in parallel. But how can I use a DataLoader that points to the dataset folders? Should I have two DataLoaders?
If you're using a pretrained model, then you're likely already retraining the final output layer. In fact, you can do the same with the first input layer. Here's a tutorial on how to do that:
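A minimal sketch of the idea (not the linked tutorial), assuming torchvision's VGG16: replace the first conv layer so it accepts 6 channels and the final classifier layer for 5 classes. You can even copy the pretrained RGB filter weights into both halves of the new first layer:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)

# new first conv layer that takes 6 input channels instead of 3
old_conv = model.features[0]  # Conv2d(3, 64, kernel_size=3, padding=1)
new_conv = nn.Conv2d(6, 64, kernel_size=3, padding=1)
with torch.no_grad():
    # initialize both 3-channel halves from the pretrained RGB weights
    new_conv.weight[:, :3] = old_conv.weight
    new_conv.weight[:, 3:] = old_conv.weight
    new_conv.bias.copy_(old_conv.bias)
model.features[0] = new_conv

# retrain the final output layer for 5 car classes
model.classifier[6] = nn.Linear(4096, 5)
```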