No Speedup with Depthwise Convolutions


#1

I was experimenting with depthwise convolutions and noticed that I’m not seeing any performance increase over standard convolutions. I tried a few different MobileNet architectures to look into this, but for the sake of reproducibility I’ll reference this script, which is a basic implementation of a MobileNet model: https://github.com/marvis/pytorch-mobilenet/blob/master/benchmark.py

If I change the script to use groups=1, the runtime of a forward pass does not change at all: it is neither faster nor slower. A forward pass takes ~15 ms on GPU and ~250 ms on CPU.
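For what it’s worth, here is roughly how I time a forward pass (a minimal sketch with a stand-in layer rather than the full MobileNet; the synchronize calls matter because CUDA kernels launch asynchronously, so a bare time.time() around the call can measure launch overhead instead of execution time):

```python
import time
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Stand-in layer; the actual script times the full MobileNet model.
model = nn.Conv2d(32, 32, 3, padding=1, groups=1).to(device)
x = torch.randn(8, 32, 56, 56, device=device)

with torch.no_grad():
    model(x)  # warm-up so one-time initialization is not timed
    if device.type == 'cuda':
        torch.cuda.synchronize()  # drain queued kernels before starting the clock
    t0 = time.time()
    for _ in range(10):
        model(x)
    if device.type == 'cuda':
        torch.cuda.synchronize()  # wait for the timed kernels to actually finish
    elapsed = (time.time() - t0) / 10
print(f'forward pass: {elapsed * 1000:.2f} ms')
```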

OS: Windows 10
GPU: GTX 1080
PyTorch: 1.0.1 (previously had 1.0.0 but upgraded to see if it made a difference)
CUDA: 9.0
cudnn: 7.4.1 (previously had 7.0.4 but upgraded to see if it made a difference)


(Nam Vo) #2

I’m not sure that groups has anything to do with depthwise convolutions.

After changing to groups=1, what kind of increase or decrease would you expect?


#3

I would expect the execution to be slower when groups=1 (or more specifically, I would expect it to be faster when groups is equal to the number of input channels). The nn.Conv2d documentation says that setting groups=in_channels is how you use depthwise convolutions in PyTorch.
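Concretely, with groups=in_channels each output channel only sees a single input channel, which is what cuts the weight count (a small sketch with made-up channel sizes):

```python
import torch
import torch.nn as nn

in_ch = 64
standard = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, bias=False)
depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                      groups=in_ch, bias=False)

# Standard:  64 filters, each spanning all 64 input channels -> 64*64*3*3 weights.
# Depthwise: 64 filters, each spanning 1 input channel       -> 64*1*3*3 weights.
print(standard.weight.shape)   # torch.Size([64, 64, 3, 3])
print(depthwise.weight.shape)  # torch.Size([64, 1, 3, 3])

x = torch.randn(1, in_ch, 32, 32)
assert standard(x).shape == depthwise(x).shape  # same output shape, 64x fewer weights
```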


(Nam Vo) #4

How much slower are you expecting, though?
The speed could be affected by other factors, such as other layers or the batch size, such that the difference is insignificant. Maybe it was 5% slower and you didn’t measure it accurately.


#5

It should be much faster to use depthwise convolutions if I’m implementing it properly. See this GitHub issue for example. Any time the number of groups is set equal to the number of input channels, that layer executes 10-100x faster. That should be apparent even when using a simple timing mechanism such as time.time().

Note that in the above link you’re looking for any lines that say Group=1024, since that was the number of channels in their input.
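A self-contained way to check a single layer (a sketch, not the linked benchmark; the shapes mirror the deepest MobileNet layers with 1024 channels on a 7x7 feature map, and bench is just a hypothetical timing helper):

```python
import time
import torch
import torch.nn as nn

def bench(conv, x, iters=5):
    """Average forward time for one layer, in milliseconds."""
    with torch.no_grad():
        conv(x)  # warm-up
        if x.is_cuda:
            torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(iters):
            conv(x)
        if x.is_cuda:
            torch.cuda.synchronize()
    return (time.time() - t0) / iters * 1000

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(1, 1024, 7, 7, device=device)
full = nn.Conv2d(1024, 1024, 3, padding=1, groups=1, bias=False).to(device)
dw = nn.Conv2d(1024, 1024, 3, padding=1, groups=1024, bias=False).to(device)
print(f'groups=1:    {bench(full, x):.2f} ms')
print(f'groups=1024: {bench(dw, x):.2f} ms')
```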


(Nam Vo) #6

Doesn’t that link show an insignificant difference between groups 1 and 2, the same as what you’re doing here? You could try groups=1024 and see if it’s faster.


#7

I had planned on using depthwise convolutions in a future network, so I was curious about this as well.

I forked that code and made the benchmark a little more extensive at https://github.com/DNGros/pytorch-mobilenet/blob/master/benchmark.py to cover different batch sizes, run with and without CUDA, and do multiple trials rather than just one.

Pytorch v1.0.0
GPU name GeForce GTX 745
-----CUDA True-----
--batch size = 1
resnet18        0.05661s
alexnet         0.01414s
vgg16           0.01525s
squeezenet1_0   0.02958s
mobilenet       0.04543s
mobilenet one group 0.10287s
mobilenet four group 0.11901s
--batch size = 4
resnet18        0.03556s
alexnet         0.00807s
vgg16           0.01727s
squeezenet1_0   0.04847s
mobilenet       0.06989s
mobilenet one group 0.29338s
mobilenet four group 0.28810s
--batch size = 32
resnet18        0.18783s
alexnet         0.00866s
vgg16           0.02191s
squeezenet1_0   0.38219s
mobilenet       0.53036s
mobilenet one group 2.59376s
mobilenet four group 2.26333s
-----CUDA False-----
--batch size = 1
resnet18        1.14055s
alexnet         0.37858s
vgg16           2.17955s
squeezenet1_0   0.46393s
mobilenet       1.27138s
mobilenet one group 1.59579s
mobilenet four group 1.26217s
--batch size = 4
resnet18        2.22609s
alexnet         0.66891s
vgg16           6.26908s
squeezenet1_0   1.56296s
mobilenet       2.62577s
mobilenet one group 3.63102s
mobilenet four group 2.69251s
--batch size = 32
resnet18        12.45708s
alexnet         2.78703s
vgg16           45.08901s
squeezenet1_0   9.72596s
mobilenet       15.53032s
mobilenet one group 22.75712s
mobilenet four group 17.14482s

Note that this is on a pretty old GPU (GTX 745) and an old CUDA (8.0). You might want to run this script as well and see what you get on your machine.

I definitely see an improvement when groups=input_channels compared to one group, but it is at best maybe 5x for larger batch sizes on GPU, and only maybe a 1.5x improvement on CPU. That speedup is certainly respectable, but I somewhat expected a larger one, given the much greater reduction in parameters and ops that depthwise convolutions should provide. I’m not sure if this is expected and matches other frameworks, or if it is a PyTorch issue.
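For reference, the theoretical savings for one layer are easy to work out, which is why the ~5x wall-clock gap looks small (a back-of-the-envelope sketch with made-up layer sizes; the MobileNet-style depthwise-separable block is a depthwise 3x3 followed by a pointwise 1x1):

```python
# Multiply-accumulate counts for one layer:
# C_in = C_out = 512 channels, 3x3 kernel, 14x14 feature map.
c, k, hw = 512, 3, 14 * 14

standard_macs = c * c * k * k * hw   # full 3x3 convolution
depthwise_macs = c * k * k * hw      # one 3x3 filter per channel
pointwise_macs = c * c * hw          # 1x1 convolution to mix channels
separable_macs = depthwise_macs + pointwise_macs

print(standard_macs / separable_macs)  # ~8.8x fewer ops in theory
```

So even on paper the whole separable block is only ~9x cheaper, and the depthwise part alone is usually memory-bound rather than compute-bound, which eats further into the measurable speedup.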


(Thomas V) #8

Note that you may be comparing different implementations. With

import torch

m1 = torch.nn.Conv1d(256, 256, 3, groups=1, bias=False).cuda()
m2 = torch.nn.Conv1d(256, 256, 3, groups=256, bias=False).cuda()
a = torch.randn(1, 256, 5, device='cuda')
b1 = m1(a)
b2 = m2(a)

I get:

In: b1.grad_fn
Out: <SqueezeBackward1 at 0x7f0f35be90b8>
In: b2.grad_fn
Out: <SqueezeBackward1 at 0x7f0ed1ec2c50>
In: b2.grad_fn.next_functions
Out: ((<ThnnConvDepthwise2DBackward at 0x7f0ed1f007f0>, 0),)
In: b1.grad_fn.next_functions
Out: ((<CudnnConvolutionBackward at 0x7f0ed1ebf780>, 0),)

So you would be comparing the non-grouped CuDNN convolution against the “native” TH(Cu)NN fallback in the grouped case (depthwise convolution isn’t - or at least wasn’t - supported by CuDNN, so PyTorch needs to fall back to its own implementation). Now I didn’t look in great detail at the CUDA THNN implementation, but when I ported libtorch to Android, the CPU THNN convolution implementation involved unfold -> matrix multiplication -> fold and was hugely inefficient.
Of course, it would be highly desirable to have a more efficient native implementation, but that is quite a bit of work (for batch norm I managed to get the wall-clock time on the GTX 1080 Ti close to CuDNN’s, but that was a lot easier than I imagine convolutions to be, with things like deciding when to use FFT-based kernels and when not to, etc.).

Best regards

Thomas