I would like to build a new convolutional layer where the addition and multiplication are all approximated (I could use accurate addition and multiplication to model that). I looked at some code related to customized Conv layers, but it seems that they rely on the conv2d primitive, which uses accurate addition and multiplication. Is there any way I can implement this? Thanks!
You would have to share a bit more about what you’re trying to do to be sure, but one bridge to more bespoke convolution-like operations can be the
torch.nn.Unfold module or the corresponding
torch.nn.functional.unfold. It reduces the convolution to Unfold + batch matrix multiplication, with the kernel reshaped into matrix form - the linked documentation has a demonstration.
This lets you replace the linear transformation in the convolution with other ops.
Note, though, that this is quite memory (and compute) intensive - each pixel is duplicated into every sliding window it appears in - but it might work for prototyping.
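For example, a minimal sketch of the unfold + matmul equivalence (shapes are arbitrary, chosen only for illustration; this reproduces F.conv2d exactly because the matmul is still exact):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8)   # (N, C_in, H, W)
w = torch.randn(4, 3, 3, 3)   # (C_out, C_in, kH, kW)

ref = F.conv2d(x, w)          # built-in primitive, for comparison

cols = F.unfold(x, kernel_size=3)   # (N, C_in*kH*kW, L) with L = 6*6
out = w.view(4, -1) @ cols          # batched matmul: (N, C_out, L)
out = out.view(2, 4, 6, 6)          # back to (N, C_out, H_out, W_out)

print(torch.allclose(ref, out, atol=1e-4))
```

The `@` on the last-but-one line is the only place arithmetic happens, which is what makes it a natural seam for swapping in other ops.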
Thanks for your reply. What I am trying to prototype is replacing the adders/multipliers in the processing units of a CPU/GPU with approximate adders/multipliers, which may produce inaccurate results. The behavior of these proposed adders/multipliers can be emulated using logic operations. So basically, I need to replace the "*" and "+" in the conv2d primitive with the functions I wrote, as follows:
y = a*b + c;
-> y = approximate_add(approximate_mul(a, b), c);
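As one example of how such an adder could be emulated with logic operations, here is a sketch along the lines of a lower-part-OR adder (LOA); the choice of LOA and of k=4 low bits is only an illustrative assumption:

```python
def approximate_add(a, b, k=4):
    # LOA-style approximation on integers: the k low bits are OR-ed
    # instead of added, so the carry chain in the low part is dropped.
    low_mask = (1 << k) - 1
    high = (a & ~low_mask) + (b & ~low_mask)  # accurate add on the high part
    low = (a | b) & low_mask                  # OR approximates add on the low part
    return high | low                         # low k bits of `high` are zero

print(approximate_add(3, 5))   # 7 (an exact adder would give 8)
```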
I looked at the unfold function, and it seems it will still call the matrix-multiplication primitive, which uses the accurate adder/multiplier, if I understand correctly. Does this mean I may need to modify the source code of conv2d?
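One way to avoid touching the conv2d source is to skip the matmul entirely: after unfold, form every individual product explicitly by broadcasting and reduce with your own function. A sketch with placeholder ops (approximate_mul and approximate_sum are hypothetical names standing in for the emulated versions; exact arithmetic is used here so the sketch can be checked against F.conv2d):

```python
import torch
import torch.nn.functional as F

def approximate_mul(a, b):
    # Placeholder: swap in the emulated approximate multiplier here.
    return a * b

def approximate_sum(t, dim):
    # Placeholder: a reduction tree of approximate_add calls would go
    # here; an exact sum keeps the sketch verifiable.
    return t.sum(dim)

def approx_conv2d(x, w):
    N, C_in, H, W = x.shape
    C_out, _, kH, kW = w.shape
    cols = F.unfold(x, kernel_size=(kH, kW))      # (N, K, L), K = C_in*kH*kW
    # Form every product a*b explicitly by broadcasting:
    # (N, 1, K, L) * (1, C_out, K, 1) -> (N, C_out, K, L)
    prods = approximate_mul(cols.unsqueeze(1), w.view(1, C_out, -1, 1))
    out = approximate_sum(prods, dim=2)           # reduce over K
    return out.view(N, C_out, H - kH + 1, W - kW + 1)

x = torch.randn(2, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
print(torch.allclose(F.conv2d(x, w), approx_conv2d(x, w), atol=1e-4))
```

Be warned that the intermediate (N, C_out, K, L) product tensor is much larger than the conv2d output, so this is for prototyping on small sizes.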
Hi, did you find any solution for the approximate adder?