I added one adaptive pooling layer to my network and training became about 3x slower. Is this expected behavior of adaptive pooling?
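For reference, the layer in question behaves roughly like this (the shapes below are illustrative, not the poster's actual network): adaptive pooling fixes the *output* size and computes the kernel/stride from whatever input it receives.

```python
import torch
import torch.nn as nn

# AdaptiveAvgPool2d always produces the requested output size,
# regardless of the input's spatial dimensions.
pool = nn.AdaptiveAvgPool2d((7, 7))

x = torch.randn(4, 64, 23, 31)  # arbitrary H and W
y = pool(x)
print(y.shape)  # torch.Size([4, 64, 7, 7])
```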
Which adaptive module are you using and what’s the input size? Also, could you profile your network and post the results please (http://pytorch.org/docs/master/autograd.html?highlight=profiler#profiler)? Thanks!
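A minimal profiling sketch along the lines of the linked docs (the tiny model and input sizes here are made up for illustration): run a forward/backward pass under the autograd profiler and sort the per-op table by total CPU time to see whether the adaptive pooling op dominates.

```python
import torch
import torch.nn as nn

# Toy model with an adaptive pooling layer (sizes are illustrative).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.AdaptiveAvgPool2d((7, 7)),
)
x = torch.randn(8, 3, 64, 64)

# Record per-operator timings for one forward + backward pass.
with torch.autograd.profiler.profile() as prof:
    out = model(x)
    out.sum().backward()

# Aggregate by op name and sort by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total"))
```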
Hi, thanks for the reply. Training is only slow for the first few hundred iterations; after that, it becomes as fast as without the adaptive pooling layer.
That’s kind of weird. Perhaps disk I/O slowed things down at the start. Glad to hear it works now!