[Contributors Welcome] Add missing torch::nn layers in C++ API

Currently, the PyTorch C++ API is missing many torch::nn layers that are available in the Python API. As part of the Python/C++ API parity project, we would like to add all of the following layers to C++:

Containers

  • [ ] ModuleDict
  • [ ] ParameterList
  • [ ] ParameterDict

Convolution layers

  • [ ] ConvTranspose1d
  • [ ] ConvTranspose2d
  • [ ] ConvTranspose3d
  • [ ] Unfold

Pooling layers

  • [ ] MaxUnpool1d
  • [ ] MaxUnpool2d
  • [ ] MaxUnpool3d
  • [ ] FractionalMaxPool2d
  • [ ] LPPool1d
  • [ ] LPPool2d
  • [ ] AdaptiveMaxPool1d
  • [ ] AdaptiveMaxPool2d
  • [ ] AdaptiveMaxPool3d
  • [ ] AdaptiveAvgPool1d
  • [ ] AdaptiveAvgPool2d
  • [ ] AdaptiveAvgPool3d

Padding layers

  • [ ] ReflectionPad1d
  • [ ] ReflectionPad2d
  • [ ] ReplicationPad1d
  • [ ] ReplicationPad2d
  • [ ] ReplicationPad3d
  • [ ] ZeroPad2d
  • [ ] ConstantPad1d
  • [ ] ConstantPad2d
  • [ ] ConstantPad3d

Non-linear activations (weighted sum, nonlinearity)

  • [ ] ELU
  • [ ] Hardshrink
  • [ ] Hardtanh
  • [ ] LeakyReLU
  • [ ] LogSigmoid
  • [ ] MultiheadAttention
  • [ ] PReLU
  • [ ] ReLU
  • [ ] ReLU6
  • [ ] RReLU
  • [ ] SELU
  • [ ] CELU
  • [ ] Sigmoid
  • [ ] Softplus
  • [ ] Softshrink
  • [ ] Softsign
  • [ ] Tanh
  • [ ] Tanhshrink
  • [ ] Threshold

Non-linear activations (other)

  • [ ] Softmin
  • [ ] Softmax
  • [ ] Softmax2d
  • [ ] LogSoftmax
  • [ ] AdaptiveLogSoftmaxWithLoss

Normalization layers

  • [ ] GroupNorm
  • [ ] SyncBatchNorm
  • [ ] InstanceNorm1d
  • [ ] InstanceNorm2d
  • [ ] InstanceNorm3d
  • [ ] LocalResponseNorm

Linear layers

  • [ ] Identity
  • [ ] Bilinear
  • [ ] Flatten

Dropout layers

  • [ ] AlphaDropout

Distance functions

  • [ ] CosineSimilarity
  • [ ] PairwiseDistance

Vision layers

  • [ ] PixelShuffle
  • [ ] Upsample
  • [ ] UpsamplingNearest2d
  • [ ] UpsamplingBilinear2d

DataParallel layers (multi-GPU, distributed)

  • [ ] DataParallel
  • [ ] DistributedDataParallel

Utilities

  • [ ] clip_grad_norm_
  • [ ] clip_grad_value_
  • [ ] parameters_to_vector
  • [ ] vector_to_parameters
  • [ ] weight_norm
  • [ ] remove_weight_norm
  • [ ] spectral_norm
  • [ ] remove_spectral_norm
  • [ ] PackedSequence
  • [ ] pack_padded_sequence
  • [ ] pad_packed_sequence
  • [ ] pad_sequence
  • [ ] pack_sequence

If you see any layers in the list that you are interested in using, please let me know whether you would also be interested in contributing to their implementation in the C++ API. Most layers would just call the corresponding ATen functions internally (e.g. the C++ MaxPoolNd module: https://github.com/pytorch/pytorch/pull/24860/files, thanks @ShahriarSS for the contribution! :smiley:) and don’t require in-depth knowledge of how the layer performs its computation, so the barrier to entry should be fairly low for C++ developers of all levels. A minimal sketch of this pattern follows.
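To make that pattern concrete, here is a minimal sketch (not the actual upstream implementation) of what one of these layers could look like, using CosineSimilarity purely as an illustration. The CosineSimilarityOptions struct is a hypothetical stand-in whose fields mirror the Python constructor arguments; the module registers no parameters, and its forward() dispatches straight to the existing ATen function:

```cpp
#include <torch/torch.h>

#include <iostream>

// Hypothetical options struct; field names and defaults mirror the Python
// torch.nn.CosineSimilarity constructor (dim=1, eps=1e-8).
struct CosineSimilarityOptions {
  int64_t dim = 1;
  double eps = 1e-8;
};

struct CosineSimilarityImpl : torch::nn::Module {
  explicit CosineSimilarityImpl(CosineSimilarityOptions opts = {})
      : options(opts) {}

  // No parameters or buffers to register; forward() just calls the existing
  // ATen function, which is the pattern most layers on this list follow.
  torch::Tensor forward(const torch::Tensor& x1, const torch::Tensor& x2) {
    return torch::cosine_similarity(x1, x2, options.dim, options.eps);
  }

  CosineSimilarityOptions options;
};

// Generates the `CosineSimilarity` wrapper (a shared-pointer holder around
// CosineSimilarityImpl), following the convention of the existing C++ modules.
TORCH_MODULE(CosineSimilarity);

int main() {
  CosineSimilarity cos(CosineSimilarityOptions{/*dim=*/1, /*eps=*/1e-6});
  std::cout << cos->forward(torch::randn({4, 8}), torch::randn({4, 8})) << '\n';
}
```

Writing the module this way keeps all of the numerical work in ATen, so the new C++ layer only has to get the options plumbing and module conventions right.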

Thanks!

Interested in clip_grad_norm_ and clip_grad_value_

Linear || Containers

I’d love to start with ConvTranspose1d.

@yf225 @ptrblck I would love to start with Non-linear activations: ReLU, Tanh, Sigmoid and maybe even MultiheadAttention :smiley:

@yf225 Interested in Convolution or Vision layers.

@yf225 I’m interested in implementing the mentioned Distance functions :slightly_smiling_face:.

I can work on linear layers

Interested in AdaptiveMaxPool layers

@yf225 I’m interested in all of the Vision layers

I’d be interested in contributing to the non-linear activations.

I can get started with the distance functions, CosineSimilarity and PairwiseDistance.

@yf225 @ptrblck I’m interested in contributing. I’m excited about the C++ frontend API. Aside from the obvious example above, do you have any other examples?

@yf225 I’m interested in the Convolution layers and non-linear activations (basic+other)

Thank you all so much for your interest; I really appreciate it. I am currently working on carving out a task for each layer (most of them involve adding the corresponding torch::nn::functional counterpart as well), and I will share the task list with all of you (along with example PRs to get started) as soon as possible. A rough sketch of what such a functional counterpart might look like is below.
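For anyone curious, here is a minimal sketch, under the same assumptions as the module example above, of what the functional counterpart might involve. It lives in an illustrative namespace rather than the real torch::nn::functional, and the signature simply mirrors Python’s torch.nn.functional.cosine_similarity: a stateless free function over the same ATen call, so the module and the functional can share one code path:

```cpp
#include <torch/torch.h>

// Illustrative namespace; the real function would live in
// torch::nn::functional once that namespace is fleshed out.
namespace nn_functional_sketch {

// Hypothetical functional counterpart of the CosineSimilarity module above.
// Defaults mirror Python's torch.nn.functional.cosine_similarity.
inline torch::Tensor cosine_similarity(
    const torch::Tensor& x1,
    const torch::Tensor& x2,
    int64_t dim = 1,
    double eps = 1e-8) {
  // Stateless: forwards directly to the ATen implementation, so the
  // CosineSimilarityImpl::forward above could delegate here instead.
  return torch::cosine_similarity(x1, x2, dim, eps);
}

} // namespace nn_functional_sketch
```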

@yf225: Would be happy to attempt to contribute code to Convolutions, Vision, and Non-linearities if all the tasks aren’t already picked up.

Great to see the work on the C++ API. I’m not sure whether I’m able to contribute to this, but I could give it a try. I could definitely contribute on the use-case side: I will integrate it with other OSS (such as Scilab) to make calling the C++ API easier for others, especially beginners with less programming background.

Thanks.

I want to contribute to the Non-linear activations or DataParallel layers, if there’s still time.
