Hi everyone, I’m trying to replace the linear operation in a neural network (i.e. the operation y = Wx + b) with my own version. Following https://pytorch.org/docs/stable/notes/extending.html, I created my own Linear class and changed its forward function from F.linear(input, self.weight, self.bias) to customized_linear(input, self.weight, self.bias), which is defined elsewhere.
The problem is how to define this customized linear function. I first tried detaching the input and self.weight tensors and converting them to numpy. After applying some linear operations to them, I converted the result back to a tensor. However, this way the loss was updated incorrectly. My guess is that the detach operation breaks the autograd history (correct me if I’m wrong).
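For reference, the extending.html tutorial I linked above handles exactly this case by wrapping the operation in a torch.autograd.Function with a hand-written backward, so that autograd works even when the forward math happens outside the recorded graph. This is my paraphrase of the tutorial's linear example, so treat the details as an assumption rather than the definitive implementation:

```python
import torch

class CustomizedLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, bias=None):
        # Save tensors needed by backward.
        ctx.save_for_backward(input, weight, bias)
        # y = x W^T + b; inside forward() the math can be anything,
        # since we supply the gradients ourselves below.
        output = input.mm(weight.t())
        if bias is not None:
            output = output + bias
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        # One gradient per forward input, in the same order.
        grad_input = grad_output.mm(weight)
        grad_weight = grad_output.t().mm(input)
        grad_bias = grad_output.sum(0) if bias is not None else None
        return grad_input, grad_weight, grad_bias
```

As I understand it, you then call it as CustomizedLinear.apply(input, weight, bias) in the module's forward, and this is the route to take if the forward pass really must leave torch tensors (e.g. for numpy), since backward() defines the gradients explicitly.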
Then I tried working on the tensors directly, without going through numpy, with some simple code:
def customized_linear(input, weight):
    input_row = len(input[:, 0])
    weight_row = len(weight[:, 0])
    test = torch.zeros(input_row, weight_row)
    for i in range(0, input_row):
        for j in range(0, weight_row):
            test[i, j] = torch.matmul(input[i, :], weight[j, :])
    test.requires_grad_(True)
    return test
But this always crashed: the notebook reported that the kernel died.
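For comparison, here is what I believe a vectorized version built only from differentiable tensor ops would look like. If I understand autograd correctly, no manual requires_grad_ call should be needed, because the history is tracked automatically whenever the inputs require grad (again, this is my own sketch, not something from the docs):

```python
import torch

def customized_linear(input, weight, bias=None):
    # y = x W^T + b, built entirely from differentiable tensor ops,
    # so autograd records the history on its own; calling
    # requires_grad_ on the output is unnecessary (and would fail
    # on a non-leaf tensor anyway).
    output = input.matmul(weight.t())
    if bias is not None:
        output = output + bias
    return output
```

This should match what the nested loops above compute (each entry is the dot product of a row of input with a row of weight), just without the Python-level double loop.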
How should I implement my customized linear operation? Please help!! Thanks!