I want to replicate the LNS-Madam paper (https://arxiv.org/pdf/2106.13914.pdf), which uses a logarithmic number system (LNS) for DNN training. Hence, I need to implement all the arithmetic operations in LNS format: for example, multiplication turns into addition, and addition itself is computed with something like a lookup table.
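To make the arithmetic concrete, here is my current understanding as a pure-Python sketch (my own illustration, not code from the paper): a value is stored as a sign plus the base-2 log of its magnitude, so multiplication is exponent addition, and addition needs the Gaussian-logarithm correction term `log2(1 + 2^-d)`, which hardware would read from a small lookup table. The function names here are my own.

```python
import math

def lns_encode(v):
    """Encode a nonzero real as (sign, log2 of magnitude)."""
    return (1 if v > 0 else -1, math.log2(abs(v)))

def lns_decode(sign, x):
    return sign * 2.0 ** x

def lns_mul(a, b):
    """Multiplication in LNS is just addition of the log values."""
    (sa, xa), (sb, xb) = a, b
    return (sa * sb, xa + xb)

def lns_add(a, b):
    """Addition uses the Gaussian-logarithm term log2(1 + 2^-d).
    Hardware would approximate it with a lookup table; it is
    computed exactly here for clarity. Positive operands only."""
    (sa, xa), (sb, xb) = a, b
    assert sa == sb == 1, "sketch handles positive operands only"
    hi, lo = max(xa, xb), min(xa, xb)
    return (1, hi + math.log2(1.0 + 2.0 ** (lo - hi)))

# 3 * 4 = 12 and 3 + 4 = 7, recovered from the log domain
a, b = lns_encode(3.0), lns_encode(4.0)
print(lns_decode(*lns_mul(a, b)))  # ~12.0
print(lns_decode(*lns_add(a, b)))  # ~7.0
```

Is this roughly the right mental model for the operations I need to provide?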
In the paper, it is mentioned that "To evaluate accuracy, we simulate LNS-Madam using a PyTorch-based neural network quantization library that implements a set of common neural network layers (e.g., convolution, fully connected) for training and inference in both full and quantized modes. The baseline library supports integer quantization in a fixed-point number system, and we further extend it to support LNS." Does this extension only need to involve the basic arithmetic operations (with all of the PyTorch operations/kernels then adapting automatically)? Or do I need to reimplement all the kernels? In either case, I would appreciate any pointers on how to get started.
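For context, my working assumption is that such libraries do "simulated" (fake) quantization: the kernels still run in floating point, but tensors pass through a quantize-dequantize step first, which for LNS would mean rounding `log2|w|` onto a fixed-point grid. A minimal torch-free sketch of that step, under my assumptions (function name and bit-width are hypothetical):

```python
import math

def lns_fake_quant(w, frac_bits=4):
    """Quantize-dequantize one weight through the log domain:
    round log2|w| to a grid with spacing 2^-frac_bits, then
    convert back to a float the unmodified kernels can consume."""
    if w == 0.0:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    scale = 2 ** frac_bits
    x = round(math.log2(abs(w)) * scale) / scale
    return sign * 2.0 ** x

print([lns_fake_quant(w) for w in [0.3, -1.7, 2.0]])
```

If this is the right picture, extending the library would mean wrapping layers (e.g. `torch.nn.Linear`) so their weights and activations go through this step, rather than rewriting any kernels. Is that how the extension in the paper should be read, or does true LNS simulation require custom kernels for the accumulation as well?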