Reversing a linear layer

The goal is to reverse a trained linear layer that maps 128 -> 64.

The network:

import torch
import torch.nn as nn

class embedding(nn.Module):

    def __init__(self):
        super(embedding, self).__init__()
        self.fc1 = nn.Linear(128, 64)

    def forward(self, x):
        x = self.fc1(x)
        return x

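For context, fc1 is an affine map y = x @ W.T + b with W of shape 64x128, which is what the inversion has to undo. A quick sketch (untrained weights and random input, just to illustrate the shapes) confirming that identity:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
fc1 = nn.Linear(128, 64)   # same shape as the layer in the network
x = torch.randn(5, 128)    # batch of 5 inputs

# nn.Linear computes x @ W.T + b, with W of shape (64, 128)
manual = x @ fc1.weight.T + fc1.bias
assert torch.allclose(fc1(x), manual, atol=1e-6)
```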
The following is the code for the solution:

# z is the embedding that needs to be reversed -> Nx64
model = embedding()  # assume that this is pre-trained
step = list(model.modules())[-1]
if isinstance(step, torch.nn.Linear):
    s1 = (z - step.bias).unsqueeze(2)  # Nx64x1
    w = step.weight  # 64x128 -> m < n -> right inverse -> w^-1 = w^T (w w^T)^-1
    s2 = torch.matmul(w.transpose(0, 1), torch.matmul(w, w.transpose(0, 1)).inverse())  # 128x64
    z = torch.matmul(s2, s1).squeeze()  # Nx128, note this overwrites z
    a3 = model(z)

The problem is that when I compute the L2 distance between z (the original embedding) and a3 (the vector reproduced by inverting and then forward-propagating), I get a large distance: torch.dist(z, a3, 2) is approximately 1.5.

This indicates that I am doing something wrong in the inversion, but I couldn't find the cause.
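For what it's worth, the right-inverse algebra in isolation does seem to round-trip correctly. A minimal sketch, with a freshly initialized layer standing in for the pre-trained one and separate variable names for each tensor:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(128, 64)   # stand-in for the pre-trained fc1
x = torch.randn(3, 128)      # original inputs (unknown in practice)
z = layer(x)                 # Nx64 embeddings to invert

with torch.no_grad():
    w = layer.weight                       # 64x128
    w_pinv = w.T @ torch.inverse(w @ w.T)  # 128x64 right inverse: w @ w_pinv = I
    x_rec = (z - layer.bias) @ w_pinv.T    # Nx128; a preimage, not necessarily x
    z_rec = layer(x_rec)                   # forward again

# w @ w_pinv = I, so layer(x_rec) recovers z up to float error
print(torch.dist(z, z_rec).item())  # tiny, nowhere near 1.5
```

Note that x_rec need not equal x (W has a nontrivial nullspace), but layer(x_rec) should still reproduce z exactly up to numerical error.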