I want numpy define loss function. I need use np.linalg.det()

Signature of method ‘loss.forward()’ does not match signature of base method in class ‘Function’ less… (⌘F1)
Inspection info: This inspection detects inconsistencies in overriding method signatures.

class loss(Function):

def forward(x,INPUT):

    batch_size = x.shape[0]

    X = x.numpy()
    input = INPUT.numpy()
    Loss = 0
    for i in range(batch_size):
        R_r = input[i,0:4]
        R_i = input[i,4:8]
        H_r = input[i,8:12]
        H_i = input[i,12:16]
        R = np.concatenate((R_r, R_i), axis=1)
        H = np.concatenate((H_r, H_i), axis=1)

        T_r = input[i,16:32]
        T_i = input[i,32:48]
        temp_t1 = np.concatenate((T_r,T_i),axis=1)
        temp_t2 = np.concatenate((-T_i,T_r),axis=1)
        T = np.concatenate((temp_t1,temp_t2),axis=0)

        phi_r = np.diagonal(X[i,0:4])
        phi_i = 1 - np.power(phi_r,2)
        temp_phi1 = np.concatenate((phi_r,phi_i),axis=1)
        temp_phi2 = np.concatenate((-phi_i, phi_r), axis=1)
        phi = np.concatenate((temp_phi1,temp_phi2),axis=0)

        temp1 = np.matmul(R,phi)
        temp2 = np.matmul(temp1,T)
        H_hat = H + temp2

        Q_r = np.zeros((4,4))
        Q_r[np.triu_indices(4,1)] = X[i,4:10]
        Q_r = Q_r + Q_r.T
        row,col = np.diag_indices(4)
        Q_r[row,col] = X[i,10:14]
        Q_i = np.zeros((4,4))
        Q_i[np.triu_indices(4,1)] = X[i,14:20]
        Q_i = Q_i - Q_i.T

        temp_Q1 = np.concatenate((Q_r,Q_i),axis=1)
        temp_Q2 = np.concatenate((-Q_i,Q_r),axis=1)
        Q = np.concatenate((temp_Q1,temp_Q2),axis=0)

        H_hat_r = H_hat[0:4]
        H_hat_i = H_hat[4:8]

        temp_H1 = np.concatenate((-H_hat_i.T,H_hat_r.T),axis=0)
        H_hat_H = np.concatenate((H_hat.T,temp_H1),axis=1)
        temp_result1 = np.matmul(H_hat,Q)
        temp_result2 = np.matmul(temp_result1,H_hat_H)


        Loss += np.linalg.det(1+temp_result2)
    Loss = t.from_numpy(Loss / batch_size)
    return Loss

def backward(grad_output):
    return grad_output

Could you try to adapt your custom Function based on this tutorial?
I assume this error is raised, as your functions are defined in an unexpected way.

Thank you very much. I will try it.
My main question is that I do many array operations in numpy like concatenate.
Whether it would do harm to the backward.

As long as you define the backward manually, the numpy operations will work, as you are responsible to calculate the right gradients.

However, if you could swap all numpy methods to the PyTorch equivalents, Autograd will automatically create the backward pass for you.