I want to use a new data representation instead of float for fine-tuning/testing a model (e.g., a DNN or an LLM) in PyTorch. The basic operations (add/sub/multiply/divide) for my data type are different from floating point. My question is whether it is possible to implement these operations (+, -, *, /) once and force all PyTorch functions (e.g., torch.add(), torch.sum(), torch.nn.Linear(), conv2d, etc.) to use my arithmetic implementation. If so, could you please guide me on how to do it?
Otherwise, I think it would take a lot of time and effort: first I would have to find out which functions my model calls (which I don't know how to do), and then replace them one by one. This becomes complicated for a large model.
I found this link from PyTorch that shows how to extend PyTorch, but it does not seem comprehensive enough to answer my question.
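To make my question concrete, here is a rough sketch of what I imagine, based on the `__torch_function__` protocol from that extension page. `MyTensor` and `my_add` are just placeholders I made up; my real data type and arithmetic would go in their place, and I only intercept `torch.add` here as an example:

```python
import torch

class MyTensor(torch.Tensor):
    # A tensor subclass whose __torch_function__ hook sees every
    # torch.* call made on it, so custom arithmetic can be swapped in.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        if func is torch.add:
            # Unwrap to plain tensors, apply my custom add, rewrap.
            a = args[0].as_subclass(torch.Tensor)
            b = args[1].as_subclass(torch.Tensor)
            return my_add(a, b).as_subclass(cls)
        # Fall back to PyTorch's default behavior for everything else.
        return super().__torch_function__(func, types, args, kwargs)

def my_add(a, b):
    # Placeholder: in reality this would emulate my data type's addition.
    return a + b

x = torch.tensor([1.0, 2.0]).as_subclass(MyTensor)
y = torch.tensor([3.0, 4.0]).as_subclass(MyTensor)
z = torch.add(x, y)  # dispatches through MyTensor.__torch_function__
```

Is this the right direction, and would it also catch the arithmetic done inside composite modules like torch.nn.Linear, or do those need a different mechanism?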
Thank you very much!