Increasing the efficiency of implementing a math equation

Given a symmetric input matrix W with a zero diagonal, I want to compute the matrix C as below. Can I make this more efficient by removing the loops?

Also, would the following break backpropagation?

import torch

# W must be floating point: torch.inverse rejects integer tensors.
W = torch.tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0],
                  [1, 0, 1, 0, 0, 1, 0, 0, 0],
                  [0, 1, 0, 3, 0, 0, 0, 0, 0],
                  [0, 0, 3, 0, 1, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0, 1, 1, 0, 0],
                  [0, 1, 0, 0, 1, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 0, 0, 0, 0, 1, 0]], dtype=torch.float)

n = len(W)
C = torch.empty(n, n)
I = torch.eye(n)
for i in range(n):
    for j in range(n):
        # B is W with the (i, j)/(j, i) entries zeroed out.
        B = W.clone()
        B[i, j] = 0
        B[j, i] = 0

        tmp = torch.inverse(n * I - B)

        # Keep only entry (i, j) of the inverse.
        C[i, j] = tmp[i, j]
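One way to remove the Python loops, as a sketch: since each iteration only differs in which pair of entries is zeroed, you can build all n² modified matrices at once and invert them with a single batched call to `torch.linalg.inv` (which accepts leading batch dimensions). Note this trades time for memory, since the batch holds n² matrices of size n×n.

```python
import torch

W = torch.tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0],
                  [1, 0, 1, 0, 0, 1, 0, 0, 0],
                  [0, 1, 0, 3, 0, 0, 0, 0, 0],
                  [0, 0, 3, 0, 1, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0, 1, 1, 0, 0],
                  [0, 1, 0, 0, 1, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 0, 0, 0, 0, 1, 0]], dtype=torch.float)

n = W.shape[0]
I = torch.eye(n)

# B[i, j] is a copy of W with entries (i, j) and (j, i) zeroed.
B = W.expand(n, n, n, n).clone()
ii = torch.arange(n).view(n, 1).expand(n, n)
jj = torch.arange(n).view(1, n).expand(n, n)
B[ii, jj, ii, jj] = 0
B[ii, jj, jj, ii] = 0

# Batched inverse over the leading (n, n) batch dimensions,
# then pick entry (i, j) out of the (i, j)-th inverse.
inv = torch.linalg.inv(n * I - B)
C = inv[ii, jj, ii, jj]
```

This avoids the Python-level loop entirely; the batched inverse also stays differentiable, so gradients can flow back to W if it is created with `requires_grad=True`.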