[Contributors Welcome] Add missing torch::nn layers in C++ API

Linear or Containers

I’d love to start with ConvTranspose1d.

@yf225 @ptrblck I would love to start with the non-linear activations: ReLU, Tanh, Sigmoid, and maybe even MultiheadAttention.

@yf225 Interested in Convolution or Vision layers.

@yf225 I’m interested in implementing the mentioned distance functions.

I can work on the linear layers.

Interested in AdaptiveMaxPool layers

@yf225 I’m interested in all of the Vision layers

I’d be interested in contributing to the non-linear activations.

I can get started with the distance functions: cosine similarity and pairwise distance.

@yf225 @ptrblck I’m interested in contributing, and I’m excited about the C++ frontend API. Other than this example, do you have any other examples?

@yf225 I’m interested in the Convolution layers and the non-linear activations (basic + other).

Thank you all so much for your interest; I really appreciate it. I am currently working on carving out the task for each layer (most of them involve adding the corresponding torch::nn::functional counterpart as well), and I will share the tasks with all of you (along with example PRs to get started) as soon as possible.


@yf225: I would be happy to contribute code to the Convolution, Vision, and non-linearity layers if those tasks aren’t already picked up.

Great to see the work on the C++ API. I am not sure whether I am able to contribute to this, but I could give it a try. I could definitely contribute on the use-case side: I will integrate it with other OSS (such as Scilab) to make calling the C++ API easier for others, especially beginners with less programming background.


I want to contribute to the non-linear activations or the DataParallel layers, if there’s still time.

I can try working on the normalization layer implementations.

I would like to work on ReflectionPad1d, ReflectionPad2d and ReflectionPad3d.