Learning speed according to the number of parameters

(jamjam) #1

I removed the FC layers from VGG16 and replaced them with global average pooling to reduce the number of parameters. The parameter count did drop by roughly 10x, but the actual training speed doesn't seem to have improved much. What is the reason?
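For context, here is a minimal sketch of the two heads being compared. The conv stack and layer sizes are toy stand-ins (not the real VGG16 definition), but the 7x7 spatial size and 4096-wide linear layers mirror VGG16's classifier, and the parameter-count gap is of the same order:

```python
import torch.nn as nn

def count_params(m):
    return sum(p.numel() for p in m.parameters())

# Classic VGG-style FC head: flatten the final 128 x 7 x 7 feature map
# (channel count is a toy assumption) into large linear layers.
fc_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(128 * 7 * 7, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)

# GAP head: average each channel down to 1x1, then one small linear layer.
gap_head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 10),
)

print(count_params(fc_head))   # ~25.7M, dominated by the first linear layer
print(count_params(gap_head))  # 1290
```

Note that almost all of the removed parameters sit in the linear layers, while most of the forward/backward compute (FLOPs) is in the conv stack, so a 10x parameter reduction need not translate into a 10x speedup.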


Did you profile your code to see where the bottleneck is?
It might be that e.g. your data loading is the bottleneck, in which case reducing the operations in your model won't change the overall iteration time at all.
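As a starting point for profiling, something like `torch.profiler` shows which operators dominate a step. This is a minimal sketch with a toy model, not your actual training loop:

```python
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

# Toy model and input standing in for the real training setup
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)
x = torch.randn(4, 3, 32, 32)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    model(x)

# Sort by total CPU time to see where the forward pass actually spends time
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

If the top entries are data-loading or copy ops rather than the model's conv/linear kernels, shrinking the classifier won't help; you could also time the `DataLoader` iteration alone to check that side.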

Also, did you time the linear vs. pooling layer properly and make sure the pooling layer is indeed faster?
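A rough way to time the two heads in isolation (toy shapes, assumed to stand in for the real ones):

```python
import time
import torch
import torch.nn as nn

x = torch.randn(8, 128, 7, 7)  # assumed batch and feature-map shape

fc_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(128 * 7 * 7, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)
gap_head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 10),
)

def bench(module, x, iters=100):
    # Warm up first; on the GPU you would also need
    # torch.cuda.synchronize() before reading the clock,
    # since CUDA kernels launch asynchronously.
    for _ in range(5):
        module(x)
    start = time.perf_counter()
    for _ in range(iters):
        module(x)
    return (time.perf_counter() - start) / iters

print(f"fc head:  {bench(fc_head, x) * 1e3:.3f} ms/iter")
print(f"gap head: {bench(gap_head, x) * 1e3:.3f} ms/iter")
```

Even if the pooling head is faster in isolation, its share of a full forward/backward pass through the conv stack may be too small for the end-to-end step time to change noticeably.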