How to create a new layer in PyTorch

To create a new layer, do we need to write the backward pass ourselves, or will autograd take care of the backward pass for a given forward pass?

      import torch

      class CustomLayer(torch.nn.Module):
          def __init__(self, k, b):
              super().__init__()
              self.latent_dim = k
              # Register the tensors as buffers so they are stored on the
              # module and move with it across devices (.to() / .cuda())
              self.register_buffer("Q", torch.empty(k, b))
              self.register_buffer("R", torch.empty(k, b))

          def forward(self, A):
              # Out-of-place additions; autograd can track these
              Q = A + self.Q
              R = A + self.R
              return Q, R

For this I'm getting the error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64]], which is output 0 of SelectBackward, is at version 65; expected version 64 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

How should I interpret this error, and how do I write a custom layer?

If you are using PyTorch operations and tensors, Autograd will take care of the backward pass. On the other hand, if you are using a 3rd-party library (such as NumPy) or non-differentiable operations, you would need to implement a custom autograd.Function with a backward definition.
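For instance, a custom autograd.Function wrapping a NumPy operation could look like this (an illustrative sketch; the `NumpyExp` name and the exp example are my own, not from your code):

```python
import numpy as np
import torch

class NumpyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Leave the autograd graph, compute in NumPy, come back as a tensor
        result = torch.from_numpy(np.exp(x.detach().cpu().numpy()))
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx exp(x) = exp(x), so reuse the saved forward output
        result, = ctx.saved_tensors
        return grad_output * result

x = torch.randn(4, dtype=torch.float64, requires_grad=True)
y = NumpyExp.apply(x)
y.sum().backward()  # x.grad now holds exp(x)
```

Since NumPy is invisible to Autograd, the `backward` staticmethod is what supplies the missing gradient rule.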

I don't think this module causes the issue, as no in-place operations are used there, so could you post a minimal, executable code snippet reproducing this error?
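For reference, a minimal snippet that does raise this class of error looks like the following: an in-place update bumps the version counter of a tensor that Autograd saved for the backward pass.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.exp(x)  # exp saves its output tensor for the backward pass
y += 1            # in-place op bumps y's version counter
try:
    y.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation ..."
    print(e)
```

Replacing the in-place `y += 1` with an out-of-place `y = y + 1` makes the error go away, which is usually the quickest fix once the offending line is found.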