Customized linear operation in neural network

Hi everyone, I’m trying to replace the linear operation in a neural network with my own version (i.e. the operation y = wx + b). Following the usual approach, I created my own Linear class and changed the forward function from F.linear(input, self.weight, self.bias) to customized_linear(input, self.weight, self.bias), which is defined elsewhere.
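For readers following along, a minimal sketch of that kind of replacement might look like the following; the class name MyLinear is hypothetical, and customized_linear here is just a placeholder that reproduces y = xW^T + b:

```python
import torch
import torch.nn as nn

def customized_linear(input, weight, bias=None):
    # Placeholder for the custom op; here it simply reproduces y = x W^T + b.
    out = input.matmul(weight.t())
    if bias is not None:
        out = out + bias
    return out

class MyLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, input):
        # Replaces F.linear(input, self.weight, self.bias)
        return customized_linear(input, self.weight, self.bias)
```

As long as customized_linear is built from differentiable tensor operations, gradients flow back into self.weight and self.bias automatically.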

The problem is how to define this customized linear function. I first tried to detach the input and self.weight tensors and convert them to NumPy arrays. After applying some linear operations, I converted the result back to a tensor. However, the loss function then got updated incorrectly. My guess is that the detach operation breaks the autograd history (correct me if I’m wrong).
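That guess is right: detach() cuts a tensor out of the autograd graph, so no gradient can flow back through it. A small sketch illustrating the difference:

```python
import torch

x = torch.randn(3, requires_grad=True)

# Going through detach()/numpy() and back breaks the graph:
y = torch.from_numpy(x.detach().numpy() * 2.0)
print(y.requires_grad)  # False: no gradient will flow back to x

# Staying in differentiable tensor ops keeps the history:
z = x * 2.0
z.sum().backward()
print(x.grad)  # tensor([2., 2., 2.])
```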

Then I tried to work with the tensors directly, without going through NumPy. I tried some simple code:

def customized_linear(input, weight):
    # Naive row-by-row dot products: test[i, j] = input[i] . weight[j]
    input_row = input.size(0)
    weight_row = weight.size(0)
    test = torch.zeros(input_row, weight_row)
    for i in range(input_row):
        for j in range(weight_row):
            test[i, j] = torch.matmul(input[i, :], weight[j, :])
    return test
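For reference, the double loop above computes the same result as a single matrix multiplication; a vectorized sketch (assuming 2-D input and weight, with an optional bias added for completeness) would be:

```python
import torch

def customized_linear_vectorized(input, weight, bias=None):
    # out[i, j] = dot(input[i], weight[j])  ==  input @ weight.T
    out = input.matmul(weight.t())
    if bias is not None:
        out = out + bias
    return out

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(5, 3, requires_grad=True)
y = customized_linear_vectorized(x, w)
print(y.shape)  # torch.Size([4, 5])
```

Besides being much faster, this keeps everything in one differentiable op instead of many per-element assignments.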

But it always stopped working and reported “Kernel died”.

How should I implement my customized linear operation? Please help!! Thanks!

By “kernel died”, do you mean a Jupyter notebook died?

Hi, that was reported by Spyder from Anaconda.

Could you run it in a terminal and see what error you get?
The IPython kernel in Spyder often dies and is restarted immediately afterwards, which suppresses the error message.

I tried running it in a terminal; Python simply stopped working after computing the 1st epoch. To be more specific, Python stopped working when it tried to compute the loss function for the 2nd epoch.

Could you try to create the custom function in an nn.Module like in the example and run it again?

I tried, and it ran successfully. I also tried to replace LinearFunction.apply(input, self.weight, self.bias) in the forward function with input.matmul(self.weight.t()) + self.bias, and that also worked. I think there is still something wrong with my implementation of the customized_linear function, but I cannot find the mistake.
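For context, the LinearFunction mentioned above resembles the custom torch.autograd.Function from the PyTorch “Extending PyTorch” tutorial, which defines both forward and backward explicitly; a sketch along those lines:

```python
import torch

class LinearFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, bias=None):
        # Save tensors needed for the backward pass.
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0)
        return grad_input, grad_weight, grad_bias

# Usage: call via .apply, never by instantiating the class.
x = torch.randn(2, 3, requires_grad=True)
w = torch.randn(4, 3, requires_grad=True)
b = torch.randn(4, requires_grad=True)
out = LinearFunction.apply(x, w, b)
out.sum().backward()
```

Writing a Function like this is only necessary when autograd cannot trace the computation (e.g. a NumPy detour); if the op is built from tensor operations, a plain nn.Module forward is enough.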

Sorry, I just realized I may have misunderstood your question. When you say “create the function in an nn.Module”, do you mean to implement my function directly in this module (e.g. the Linear subclass) and call it in the forward function?