Does pooling always raise the accuracy of a CNN?

Pooling is always presented as an accepted part of a CNN architecture. However, are there any cases where pooling might be detrimental to performance? The advantages of pooling are usually framed as reducing the amount of data, but in cases where memory is no problem, might there be advantages to not pooling?

I don’t think the memory and compute reductions are the only advantages of using pooling layers; I would argue that reducing the dimensionality of the intermediate activations should also help the classifier generalize, since the later layers see a coarser, more compact summary of the input. However, my point of view might come from a more “classical” ML perspective, so I would be interested to hear from others, too.
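To make the dimensionality-reduction point concrete, here is a minimal numpy sketch of 2×2 max pooling (the function name `max_pool2d` and the example feature map are my own, just for illustration). Each non-overlapping 2×2 window collapses to its maximum, so a 4×4 activation map becomes 2×2 — a 4× reduction in the values the next layer has to deal with:

```python
import numpy as np

def max_pool2d(x, k=2):
    """Naive k x k max pooling over a 2-D feature map (stride = k)."""
    h, w = x.shape
    # crop to a multiple of k, then reshape so each k x k window gets its own axes
    x = x[:h - h % k, :w - w % k]
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

# toy 4x4 activation map
fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [9, 8, 1, 2],
                 [7, 6, 3, 4]], dtype=float)

pooled = max_pool2d(fmap)
print(pooled)  # 2x2 map: the max of each 2x2 window -> [[4, 8], [9, 4]]
```

Note that only the strongest response in each window survives; the exact position within the window is discarded, which is also where the oft-cited small-translation invariance of pooling comes from.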

Well, many thanks for your view, but not much else is coming through on this one. I guess people just follow the tutorials without really asking too many questions. I’ll continue to pursue this and will post when I have more information. Cheers, NZ1