To create a new layer, do we need to write the backward pass ourselves, or will autograd take care of the backward pass for a given forward pass? Here is my layer:
import torch

class CustomLayer(torch.nn.Module):
    def __init__(self, k, b):
        super(CustomLayer, self).__init__()
        self.latent_dim = k
        self.Q = torch.empty(k, b)  # uninitialized state tensors
        self.R = torch.empty(k, b)

    def forward(self, A):
        # The layer overwrites its stored state on every call
        self.Q = A + self.Q
        self.R = A + self.R
        return self.Q, self.R
For this I'm getting the error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64]], which is output 0 of SelectBackward, is at version 65; expected version 64 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
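From what I understand, this error means that autograd saved a tensor during the forward pass for use in computing gradients, and that tensor was then modified in place (its version counter went from 64 to 65) before `backward()` ran. A minimal repro of the same class of error, as I understand it (this is not my actual code; `exp` is just an example of an op that saves its output for backward):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.exp(x)    # exp saves its output y for the backward pass
y[0] = 0.0          # in-place write bumps y's version counter
y.sum().backward()  # RuntimeError: one of the variables needed for gradient
                    # computation has been modified by an inplace operation
```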
Is that the right way to interpret the error, and how should the custom layer be written?
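For concreteness, here is my guess at an autograd-friendly rewrite, assuming `Q` and `R` are meant to be learnable parameters rather than accumulated state (the `nn.Parameter` registration and random init are my assumptions):

```python
import torch

class CustomLayer(torch.nn.Module):
    def __init__(self, k, b):
        super().__init__()
        self.latent_dim = k
        # Registering Q and R as parameters makes them learnable and
        # moves them along with .to(device) / .cuda()
        self.Q = torch.nn.Parameter(torch.randn(k, b))
        self.R = torch.nn.Parameter(torch.randn(k, b))

    def forward(self, A):
        # Out-of-place additions return fresh tensors and mutate nothing,
        # so autograd can record the graph and derive backward on its own
        return A + self.Q, A + self.R
```

My understanding is that as long as `forward` is built from differentiable PyTorch ops and avoids mutating tensors that autograd has saved, the backward pass is derived automatically and no custom `backward` is needed; is that correct?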