Performance issues in Octave Convolution ResNet

Hello PyTorch community,

I’ve been working on an implementation of Octave Convolution in PyTorch, and now I’m trying to benchmark it.

This new octave convolution layer is supposed to be a “drop-in” replacement for nn.Conv2d, and according to the paper it should outperform its vanilla counterpart in both accuracy and speed.

In my implementation, I benchmarked the convolutions individually, and OctConv2d is indeed faster than nn.Conv2d. However, the same is not true for the ResNet implementation, which I modified from the original torchvision implementation.

The benchmark code can be found here

Any clue as to why this may be?

Thanks!
Miguel

Hi

Were you able to resolve this?

I think with CUDA, the timings being measured by your code might not be accurate:
(https://github.com/braincreators/octconv/blob/oct-resnet152/benchmarks/benchmark.py#L49)

You would need either torch.cuda.synchronize() / CUDA events or the autograd profiler (see How to measure time in PyTorch).
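To illustrate, here is a minimal sketch of a synchronized timing helper using torch.cuda.Event (the helper name cuda_time and the layer/input shapes are just for illustration, not from the benchmark script):

```python
import time
import torch

def cuda_time(fn, warmup=3, iters=10):
    """Return the average time of fn() in milliseconds.

    On GPU, uses CUDA events and synchronizes so queued kernels are
    actually finished before the clock stops; on CPU, falls back to
    plain wall-clock timing.
    """
    for _ in range(warmup):  # warm-up runs are excluded from the timing
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            fn()
        end.record()
        torch.cuda.synchronize()  # wait for all kernels before reading the events
        return start.elapsed_time(end) / iters
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) * 1000.0 / iters

# Example: time a plain Conv2d forward pass
conv = torch.nn.Conv2d(64, 64, kernel_size=3, padding=1)
x = torch.randn(1, 64, 56, 56)
ms = cuda_time(lambda: conv(x))
print(f"{ms:.3f} ms per forward pass")
```

Without the synchronize calls, CUDA kernels are launched asynchronously, so a naive time.time() measurement can stop the clock before the GPU work has actually finished, which would explain misleading layer-vs-network comparisons.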

Hi,
Can you please mention the challenges faced by Octave Convolution?

Hi,
Can you please mention what methodology was used for implementing Octave Convolution?

You can check the implementation in the repository linked in the first post.

Can you share any proposed diagram for Octave Convolution?

It is also in the repository, but you should read the paper for that.

You didn’t mention any proposed diagram in the link.

Can you please send the paper regarding Octave Convolution?