Where is the CUDA implementation of nn.Linear?

I cannot find the CUDA implementation of nn.Linear, for either the forward or the backward pass. Can anyone help me with this?


nn.Linear uses F.linear, which will call into addmm or matmul here and then dispatch to the cuBLAS method here.
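For intuition, here is a minimal sketch of that equivalence (the tensor names and shapes are just examples): on a 2-D input, F.linear computes the same thing as addmm with the transposed weight.

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 16, device="cuda")        # (batch, in_features)
weight = torch.randn(32, 16, device="cuda")  # (out_features, in_features)
bias = torch.randn(32, device="cuda")

out_linear = F.linear(x, weight, bias)        # what nn.Linear calls
out_addmm = torch.addmm(bias, x, weight.t())  # bias + x @ weight.T
print(torch.allclose(out_linear, out_addmm, atol=1e-5))  # True
```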


Thanks for the answer. By the way, what is the backward function/implementation for F.linear? I see that many ops define their backward functions in the native_functions.yaml file, but for those that don't, how do we find their backward implementation? Is there a common way?

The backward methods of the aforementioned functions are defined in derivatives.yaml, as seen here.
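Those entries encode the usual matrix-calculus rules. As a minimal sketch (tensor names are just examples), you can verify the hand-derived gradients of out = x @ weight.T + bias against what autograd computes through F.linear:

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 16, requires_grad=True)
weight = torch.randn(32, 16, requires_grad=True)
bias = torch.randn(32, requires_grad=True)

out = F.linear(x, weight, bias)
grad_out = torch.randn_like(out)
out.backward(grad_out)

# Hand-derived gradients for out = x @ weight.T + bias
print(torch.allclose(x.grad, grad_out @ weight))       # True
print(torch.allclose(weight.grad, grad_out.t() @ x))   # True
print(torch.allclose(bias.grad, grad_out.sum(dim=0)))  # True
```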


Thank you very much for your answer!!! But I still have some small questions:
In https://github.com/pytorch/pytorch/blob/c1c9be16c4d0648fc134d04f30c8463575df7ada/aten/src/ATen/native/cuda/Blas.cpp I cannot find a function named addmm or matmul, only something like addmm_out_cuda_impl. I know they should be closely related, but how are they connected?

Since I am implementing a linear layer myself, I also wonder how I can use these functions in my own code: can I really include and run them?

Thank you!!!

Hi @Ziyu_Huang, I had the same issue, so I wrote code implementing the Conv2D and Linear layers in native CUDA here. Hopefully it will be useful for you! I only wrote the forward pass in CUDA, but you can hopefully use it as a base to write the backward pass if you need it.
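If it helps, here is a minimal sketch (not the code from that repo; MyLinearFunction is a hypothetical name) of how the backward pass can be wired up with a custom torch.autograd.Function. The forward line could later be swapped for a call into a custom CUDA kernel:

```python
import torch

class MyLinearFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, bias):
        ctx.save_for_backward(x, weight)
        # Replace this line with a custom CUDA kernel call if desired.
        return x @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        grad_x = grad_out @ weight       # dL/dx
        grad_weight = grad_out.t() @ x   # dL/dW
        grad_bias = grad_out.sum(dim=0)  # dL/db
        return grad_x, grad_weight, grad_bias

# Quick numerical check against autograd (double precision for stability)
x = torch.randn(4, 6, dtype=torch.double, requires_grad=True)
w = torch.randn(5, 6, dtype=torch.double, requires_grad=True)
b = torch.randn(5, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(MyLinearFunction.apply, (x, w, b)))  # True
```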
