Hello PyTorch community,
This new octave convolution layer is supposed to be a “drop-in” replacement for nn.Conv2d, and according to the paper it should outperform its vanilla counterpart in both accuracy and speed.
In my implementation, I benchmarked the convolutions individually, and OctConv2d is indeed faster than nn.Conv2d. However, the same is not true for my ResNet implementation, which I modified from the original torchvision implementation.
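For context, the per-layer timing follows roughly this pattern: warm-up runs first, then the median over repeats. The `benchmark` helper and the stand-in workload below are an illustrative sketch, not the actual code from the repo (and on GPU one would additionally need to synchronize, e.g. with `torch.cuda.synchronize()`, before reading the clock):

```python
import time
import statistics

def benchmark(fn, warmup=10, repeats=50):
    """Time a callable: discard warm-up runs, return the median over repeats.

    Note: for CUDA workloads a synchronization call would be required
    around the timed region; this pure-Python sketch omits it.
    """
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Illustrative usage with a stand-in workload; in the real benchmark,
# fn would be a forward pass of OctConv2d or nn.Conv2d on a fixed input.
elapsed = benchmark(lambda: sum(i * i for i in range(10_000)))
```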
The benchmark code can be found here.
Any clue as to why this may be?