Due to limited GPU memory, I can only use batch size = 1.
So I resized the images and increased the batch size from 1 to 4.
But then the training loss does not decrease at all.
Is that possible? My task is quite similar to regression.
Or what could be the problem, and how can I fix it?
batch size = 1 loss graphs (top: training, bottom: validation)
batch size = 4 + resized images
The network's output is always the same…
How small are your images after resizing them?
The Inception model was trained on images of
299x299, so images that are too small might give bad performance.
They were smaller than 299x299.
I will try larger ones!
Thanks a lot.
May I ask one more question?
I think my network is overfitting the data.
So I am trying to reduce the model size.
My model is I3D.
It contains several consecutive Inception modules.
Is it OK to remove one or two of the five consecutive modules?
Or can you suggest other ways to reduce the size of my I3D model?
I would be honored to receive your reply.
Do you mean "Inflated 3D ConvNets" by I3D?
I’m really unsure how to slim down your model, but your idea sounds reasonable.
Changing the number of filters might be another valid approach.
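As a hedged illustration of removing trailing modules (the block stack below is a hypothetical stand-in for the five consecutive Inception modules, assuming they live in an `nn.Sequential`):

```python
import torch.nn as nn

# Hypothetical stand-in for five consecutive inception-style 3D blocks.
blocks = nn.Sequential(
    *[nn.Conv3d(16, 16, kernel_size=3, padding=1) for _ in range(5)]
)

# Keep only the first three blocks to shrink the model.
blocks = nn.Sequential(*list(blocks.children())[:3])
print(len(blocks))  # 3
```

If the later blocks are removed, any head (pooling / classifier) that consumed their output must still see a matching channel count; here all blocks keep 16 channels, so the slice is shape-safe.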
I will try it.
I have two more questions.
- Is batch size = 2 harder to train than batch size = 1?
- My training loss is 0.002, but if I evaluate the model on the training data, the loss increases to 0.2.
I suspect that dropout behaves differently under model.eval(), so the model's output changes, or that the number of epochs is insufficient.
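The dropout suspicion can be sanity-checked with a minimal sketch (assuming PyTorch; the tiny model below is hypothetical, not my actual I3D): in train() mode dropout randomly zeroes activations, so repeated forward passes differ, while in eval() mode dropout is a no-op and the output is deterministic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny model with dropout, only to illustrate train()/eval() behavior.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5), nn.Linear(8, 1))
x = torch.randn(4, 8)

model.train()
out_a = model(x)
out_b = model(x)
# In train mode dropout is stochastic, so two passes typically differ.
print(torch.allclose(out_a, out_b))

model.eval()
with torch.no_grad():
    out_c = model(x)
    out_d = model(x)
# In eval mode dropout is disabled, so outputs are deterministic.
print(torch.allclose(out_c, out_d))  # True
```

Note that eval() does not change the loss by itself; a large train-vs-eval gap on the *same* data more often points at layers like BatchNorm, whose running statistics differ from per-batch statistics (especially with batch size 1).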
Please help me!