# Finding mu and logvar

How do I get the mean and log variance from the encoder's latent space?

I receive `x = self.enc5(x)  # torch.Size([75, 16, 101, 101])` from the encoder,

and I would like to get `mu` and `logvar` from `x` to pass them to:

``````
    def reparameterize(self, mu, log_var):
        """
        :param mu: mean from the encoder's latent space
        :param log_var: log variance from the encoder's latent space
        """
        std = torch.exp(0.5 * log_var)  # standard deviation
        eps = torch.randn_like(std)     # `randn_like` as we need the same size
        sample = mu + (eps * std)       # sampling
        return sample
``````

What is the right way to do so?

When I try:

``````
        mu_logvar = x.view(x.shape[0], -1)  # flatten to [75, 163216]
        mu = self.l1(mu_logvar)
        log_var = self.l2(mu_logvar)
        z = self.reparameterize(mu, log_var)
``````

where

``````
        self.l1 = nn.Linear(163216, 500)
        self.l2 = nn.Linear(163216, 500)
``````

I get:

``````
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [16, 128, 4, 4], but got 2-dimensional input of size [75, 500] instead
``````

on

``````
x = F.relu(self.decoder1(z))
``````

The output of `reparameterize` seems to be a 2D tensor with the shape `[75, 500]`, while `self.decoder1` expects a 4D input and is most likely a (transposed) conv layer.
If that's the case, `unsqueeze` the spatial dimensions and it might work:

``````
x = F.relu(self.decoder1(z[:, :, None, None]))
``````
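For reference, here is a minimal, self-contained sketch of how the pieces could fit together: flatten the encoder output, feed it through two linear heads for `mu` and `log_var`, reparameterize, unsqueeze, and decode. It reuses the layer names from the question (`l1`, `l2`, `reparameterize`, `decoder1`), but the decoder here is a hypothetical `nn.ConvTranspose2d` whose input channels are set to the latent size, so treat it as an illustration of the shape handling rather than the original model:

``````
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyVAEHead(nn.Module):
    """Sketch: encoder output [N, 16, 101, 101] -> latent -> conv decoder."""

    def __init__(self, latent_dim=500):
        super().__init__()
        self.l1 = nn.Linear(16 * 101 * 101, latent_dim)  # mu head
        self.l2 = nn.Linear(16 * 101 * 101, latent_dim)  # log_var head
        # Hypothetical decoder layer: in_channels must match latent_dim
        # for the unsqueeze approach to work.
        self.decoder1 = nn.ConvTranspose2d(latent_dim, 128, kernel_size=4)

    def reparameterize(self, mu, log_var):
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + eps * std

    def forward(self, x):
        # x: encoder output, e.g. [75, 16, 101, 101]
        flat = x.view(x.shape[0], -1)         # -> [N, 163216]
        mu = self.l1(flat)                    # -> [N, 500]
        log_var = self.l2(flat)               # -> [N, 500]
        z = self.reparameterize(mu, log_var)  # -> [N, 500]
        z = z[:, :, None, None]               # -> [N, 500, 1, 1], 4D for the conv decoder
        out = F.relu(self.decoder1(z))        # -> [N, 128, 4, 4]
        return out, mu, log_var


x = torch.randn(2, 16, 101, 101)  # stand-in for the enc5 output
out, mu, log_var = TinyVAEHead()(x)
print(out.shape, mu.shape, log_var.shape)
``````

If the real `decoder1` expects a different number of input channels (the weight shape `[16, 128, 4, 4]` in the error message suggests 16), an alternative would be an extra linear layer mapping `z` to `channels * height * width` followed by a `view` into the 4D shape the decoder expects.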