How can I implement a convolution layer with 1-bit weights?

Currently, convolution layers are all based on float32 operations, which require lots of multiplications.

I trained a binary network whose weights are all in {-1, 1}, so the convolution could be computed with XNOR and bitcount operations. Which documentation should I read, or which files should I modify, to adapt the basic convolution operation for 1-bit convolution?

To get the acceleration, should I change the PyTorch source code and recompile, or write a separate cpp/cu file that supports bit operations?
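For reference, here is a minimal pure-Python sketch of the identity I have in mind, assuming +1 is encoded as bit 1 and -1 as bit 0: dot(a, b) = 2 * popcount(xnor(a, b)) - n.

```python
import random

n = 64
a = [random.choice([-1, 1]) for _ in range(n)]
b = [random.choice([-1, 1]) for _ in range(n)]

def pack(v):
    """Pack a {-1, +1} list into a Python int, bit i set iff v[i] == +1."""
    bits = 0
    for i, x in enumerate(v):
        if x == 1:
            bits |= 1 << i
    return bits

mask = (1 << n) - 1
xnor = ~(pack(a) ^ pack(b)) & mask        # XNOR restricted to n bits
dot_bits = 2 * bin(xnor).count("1") - n   # popcount via bin().count("1")
dot_ref = sum(x * y for x, y in zip(a, b))
assert dot_bits == dot_ref
```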

Hi,

You will need to write your own custom cpp/cu code to support something like this, especially if your input is float32.
Note that float64/32/16 convolutions are heavily optimized by cuDNN, so you will need a very high-quality implementation to beat them.
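As a very rough sketch of what such custom code could look like (not a full binary convolution, just the bit-packed XNOR/popcount dot product it would be built on, loaded through torch.utils.cpp_extension.load_inline; the binary_dot name and the shapes here are made up for illustration):

```python
import torch
from torch.utils.cpp_extension import load_inline

cpp_src = "torch::Tensor binary_dot(torch::Tensor a, torch::Tensor b);"

cuda_src = r"""
#include <torch/extension.h>

__global__ void binary_dot_kernel(const int32_t* a, const int32_t* b,
                                  int32_t* out, int n_words) {
    int partial = 0;
    // each thread strides over the packed words; 32 weights per word
    for (int i = threadIdx.x; i < n_words; i += blockDim.x) {
        partial += __popc(~(a[i] ^ b[i]));   // XNOR + popcount
    }
    atomicAdd(out, partial);
}

torch::Tensor binary_dot(torch::Tensor a, torch::Tensor b) {
    auto out = torch::zeros({1}, a.options());
    int n_words = static_cast<int>(a.numel());
    binary_dot_kernel<<<1, 256>>>(a.data_ptr<int32_t>(), b.data_ptr<int32_t>(),
                                  out.data_ptr<int32_t>(), n_words);
    // popcount of the XNOR counts matching bits;
    // dot over {-1,+1} = 2 * matches - total number of bits
    return out * 2 - 32 * n_words;
}
"""

if torch.cuda.is_available():
    ext = load_inline(name="binary_dot_ext", cpp_sources=cpp_src,
                      cuda_sources=cuda_src, functions=["binary_dot"])
    a = torch.randint(0, 2**31 - 1, (1024,), dtype=torch.int32, device="cuda")
    b = torch.randint(0, 2**31 - 1, (1024,), dtype=torch.int32, device="cuda")
    print(ext.binary_dot(a, b))
```

A real binary convolution layer would still need bit-packing of weights/activations and an im2col-style (or tiled) loop on top of this inner product, which is where beating cuDNN gets hard.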

Thank you for your timely reply!

I wonder whether there is a drop in accuracy if I use float16 rather than float32. Will the performance drop a lot?

If I'm going to implement it, Bitwise-operations-on-cuda lists two ways: a CUDA extension or CuPy.
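For the CuPy route, I imagine something along these lines could work (just a sketch assuming the weights and activations are already bit-packed into uint32 words; the xnor_popcount_dot kernel name is made up):

```python
import cupy as cp

# CUDA C kernel: XNOR + __popc over bit-packed uint32 words.
kernel = cp.RawKernel(r"""
extern "C" __global__
void xnor_popcount_dot(const unsigned int* a, const unsigned int* b,
                       int* out, int n_words) {
    int acc = 0;
    for (int i = threadIdx.x; i < n_words; i += blockDim.x) {
        acc += __popc(~(a[i] ^ b[i]));
    }
    atomicAdd(out, acc);
}
""", "xnor_popcount_dot")

n_words = 1024
a = cp.random.randint(0, 2**31, n_words).astype(cp.uint32)
b = cp.random.randint(0, 2**31, n_words).astype(cp.uint32)
out = cp.zeros(1, dtype=cp.int32)

kernel((1,), (256,), (a, b, out, cp.int32(n_words)))
dot = 2 * int(out[0]) - 32 * n_words   # dot product of the underlying {-1,+1} values
print(dot)
```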

That shouldn't be the case.
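A quick way to check on your own setup (this assumes a CUDA device is available; it is only a rough single-layer sanity check, not a substitute for validating your full model in float16):

```python
import torch
import torch.nn as nn

if torch.cuda.is_available():
    torch.manual_seed(0)
    conv = nn.Conv2d(64, 64, kernel_size=3, padding=1).cuda()
    x = torch.randn(8, 64, 32, 32, device="cuda")

    out_fp32 = conv(x)
    out_fp16 = conv.half()(x.half())

    # maximum error of the fp16 result, relative to the fp32 output scale
    rel_err = (out_fp32 - out_fp16.float()).abs().max() / out_fp32.abs().max()
    print(f"max relative error fp16 vs fp32: {rel_err.item():.2e}")
```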