Can I use the default backward of tensor operations when writing a custom CUDA layer?

Hello,
I am trying to implement a custom layer.
This custom layer includes many basic tensor operations, such as the dot product and cross product.

I want to reuse these basic operations as they are implemented in PyTorch.

## Question
Is it possible to reuse the tensor operations that are built into PyTorch?
If not, where can I find the code for these basic tensor operations in the PyTorch repository?

I’m not sure what your actual use case is, but you could directly use torch methods to write a new layer.
As long as you are using differentiable operations, Autograd will capture the computation graph and will be able to compute the backward pass.
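For example, a layer like this (just a quick sketch with a made-up `DotCrossLayer` and arbitrary shapes) works without any custom backward:

```python
import torch
import torch.nn as nn

# Sketch of a layer built only from differentiable torch ops;
# autograd records the graph and derives the backward pass automatically.
class DotCrossLayer(nn.Module):
    def forward(self, a, b):
        dot = (a * b).sum(dim=-1, keepdim=True)  # batched dot product
        cross = torch.cross(a, b, dim=-1)        # batched cross product
        return dot * cross

layer = DotCrossLayer()
a = torch.randn(8, 3, requires_grad=True)
b = torch.randn(8, 3, requires_grad=True)
layer(a, b).sum().backward()  # gradients flow to a and b without a manual backward
print(a.grad.shape, b.grad.shape)
```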

If you are breaking the computation graph for some reason, you would need to implement the backward method manually as described here.
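In that case a manual backward via `torch.autograd.Function` would look roughly like this (again just a sketch; `ManualDot` and its gradients are a made-up example):

```python
import torch

# Sketch of a manually defined backward via torch.autograd.Function;
# this is only needed if autograd cannot trace the forward (e.g. custom kernels).
class ManualDot(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return (a * b).sum(dim=-1)

    @staticmethod
    def backward(ctx, grad_out):
        a, b = ctx.saved_tensors
        # d(a.b)/da = b and d(a.b)/db = a, each scaled by the incoming gradient
        return grad_out.unsqueeze(-1) * b, grad_out.unsqueeze(-1) * a

a = torch.randn(8, 3, requires_grad=True)
b = torch.randn(8, 3, requires_grad=True)
ManualDot.apply(a, b).sum().backward()
print(a.grad.shape, b.grad.shape)
```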

Are you writing a custom CUDA kernel or should the layer just be executable on the GPU?

I already tested my new layer with tensor operations in Python. I didn't have to implement the backward function, since the layer is composed of basic tensor operations.
It works well, but it is too slow.

So, I want to speed the layer up by writing it in CUDA.

If you want to write a custom CUDA extension, have a look at this tutorial.
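As a rough idea of the Python side, `torch.utils.cpp_extension.load_inline` can JIT-compile C++/CUDA sources at runtime (it needs the CUDA toolkit and nvcc available). This is only a minimal sketch: `fused_dot`, `fused_dot_ext`, and the placeholder body are made up for illustration, not a real kernel:

```python
import torch
from torch.utils.cpp_extension import load_inline

# Hypothetical sketch: JIT-compile a tiny C++/CUDA extension at runtime.
# A real extension would launch a hand-written kernel instead of
# reusing the built-in ops in the placeholder body below.
cpp_src = "torch::Tensor fused_dot(torch::Tensor a, torch::Tensor b);"
cuda_src = """
torch::Tensor fused_dot(torch::Tensor a, torch::Tensor b) {
  // placeholder body: row-wise dot product using built-in ops
  return torch::sum(a * b, {-1});
}
"""

ext = load_inline(
    name="fused_dot_ext",
    cpp_sources=cpp_src,
    cuda_sources=cuda_src,
    functions=["fused_dot"],
    verbose=True,
)

a = torch.randn(8, 3, device="cuda")
b = torch.randn(8, 3, device="cuda")
print(ext.fused_dot(a, b).shape)  # torch.Size([8])
```

In a real extension you would replace the placeholder body with calls into your hand-written kernel and usually wrap the op in a `torch.autograd.Function` (as in the earlier sketch) so you can provide the matching backward, which the tutorial walks through as well.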