Is there a way to implement a separable convolution as a single layer in PyTorch? Does PyTorch have such a module?
Assuming you are looking for a layer applying a depthwise convolution followed by a pointwise one, you could simply wrap both in a custom
nn.Module to create a single layer.
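A minimal sketch of such a wrapper (the class name `SeparableConv2d` and its signature are my own choices, not a built-in PyTorch module):

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise-separable convolution as one layer:
    a depthwise conv (groups=in_channels) followed by a 1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, bias=True):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_channels)
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   stride=stride, padding=padding,
                                   groups=in_channels, bias=bias)
        # Pointwise: 1x1 conv mixing channels to out_channels
        self.pointwise = nn.Conv2d(in_channels, out_channels,
                                   kernel_size=1, bias=bias)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

layer = SeparableConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(1, 3, 32, 32))  # shape: (1, 16, 32, 32)
```

From the caller's perspective this behaves as a single layer, even though the two convolutions are still separate ops internally.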
That can be done, but internally they are still two separate function calls. Is there any way to invoke them with one single call?
No, I don’t believe any backend fuser is currently able to create a single kernel for them.
Is there a way to do that ?
Yes, you could write a custom CUDA extension as described here with your own kernel.