Autoencoder reconstruction metric

I am building a Variational Autoencoder, and I am looking for a metric to compare the input with the reconstruction.

My input is a sparse matrix (not an image).

My model looks like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, inputDim, hiddenDim, zdim):
        super(VAE, self).__init__()

        self.dropout = nn.Dropout(0.4)

        self.inputDim = inputDim
        self.hiddenDim = hiddenDim
        self.zdim = zdim

        # Encoder
        self.hiddenEnc1   = nn.Linear(self.inputDim, self.hiddenDim)
        self.hiddenEncBN1 = nn.BatchNorm1d(num_features=self.hiddenDim)
        self.mu2  = nn.Linear(self.hiddenDim, self.zdim)   # mean of q(z|x)
        self.var2 = nn.Linear(self.hiddenDim, self.zdim)   # log-variance of q(z|x)
        # Decoder
        self.hiddenDec1   = nn.Linear(self.zdim, self.hiddenDim)
        self.hiddenDecBN1 = nn.BatchNorm1d(num_features=self.hiddenDim)
        self.hiddenDecL3  = nn.Linear(self.hiddenDim, self.inputDim)

    def encode(self, x):
        h1 = F.relu(self.hiddenEncBN1(self.dropout(self.hiddenEnc1(x))))
        return self.mu2(h1), self.var2(h1)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps with eps ~ N(0, I).
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h3 = F.relu(self.hiddenDecBN1(self.hiddenDec1(z)))
        # The sigmoid squashes every reconstructed entry into (0, 1).
        return torch.sigmoid(self.hiddenDecL3(h3))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar, z
```
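For reference, the objective that usually accompanies this architecture (sigmoid output, Gaussian latent) is binary cross-entropy plus the analytic KL term. This is the standard formulation, sketched here as context rather than my exact training code; the BCE term assumes the targets are scaled to [0, 1]:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: binary cross-entropy summed over all entries.
    # This matches a sigmoid output and assumes x is scaled to [0, 1].
    bce = F.binary_cross_entropy(recon_x, x, reduction='sum')
    # Analytic KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```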

In order to compare the input with the reconstruction, I measured the Pearson correlation for each sample and plotted the distribution of the correlations. The result is not good. My main concern is: is this even a good metric, given that I start with a sparse matrix and end up with a sigmoid-activated reconstruction? What would be the best way of comparing the two matrices?
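Concretely, the per-sample comparison looks roughly like this (a minimal sketch: random tensors stand in for my actual data, and `pearson_per_row` is just an illustrative helper name):

```python
import torch

def pearson_per_row(x, y, eps=1e-8):
    # Pearson correlation between corresponding rows of x and y.
    xc = x - x.mean(dim=1, keepdim=True)
    yc = y - y.mean(dim=1, keepdim=True)
    num = (xc * yc).sum(dim=1)
    den = xc.norm(dim=1) * yc.norm(dim=1) + eps
    return num / den

# Dummy stand-ins: a sparse 0/1 input and a sigmoid-like reconstruction.
torch.manual_seed(0)
x = (torch.rand(64, 200) < 0.05).float()     # ~5% of entries nonzero
recon = torch.sigmoid(torch.randn(64, 200))  # every entry in (0, 1)
r = pearson_per_row(x, recon)                # one correlation per sample
```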

The result is better when I compare the reconstruction to the sigmoid of the input instead. But I wanted to check whether I should change the model, the output activation, etc.