RuntimeError: shape '[128, 1, 1]' is invalid for input of size 1

I trained a GAN and now I want to test how it transfers style. I have an image of shape torch.Size([1, 3, 256, 256]), which I pass to the discriminator. Its last layers are

    nn.Conv2d(1024, 1, kernel_size=4, stride=1, padding=0, bias=False),
    # out: 1 x 1 x 1
    nn.Flatten(),
    nn.Sigmoid())

    lat = model['discriminator'](images)
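
For reference, here is a minimal stand-in for that tail showing why the output collapses to a single value per image (I'm assuming the feature map reaching the last conv is 1024 x 4 x 4, which is what the # out: 1 x 1 x 1 comment implies):

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for the real discriminator tail,
    # assuming a 1024 x 4 x 4 feature map reaches the last conv.
    tail = nn.Sequential(
        nn.Conv2d(1024, 1, kernel_size=4, stride=1, padding=0, bias=False),  # -> 1 x 1 x 1
        nn.Flatten(),   # -> (N, 1)
        nn.Sigmoid(),
    )

    features = torch.randn(1, 1024, 4, 4)   # dummy feature map
    print(tail(features).shape)             # torch.Size([1, 1])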

lat has shape torch.Size([1, 1]). The generator's first block is

    # latent_size = 128
    nn.ConvTranspose2d(latent_size, 1024, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(1024),
    nn.ReLU(True),
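
So the first ConvTranspose2d expects a 4-D tensor with latent_size channels; a quick check of what the generator normally receives (assuming latent_size = 128 as in the comment):

    import torch
    import torch.nn as nn

    # The first generator layer wants a (N, latent_size, 1, 1) input.
    latent_size = 128
    head = nn.ConvTranspose2d(latent_size, 1024, kernel_size=4, stride=1, padding=0, bias=False)

    z = torch.randn(1, latent_size, 1, 1)   # what the generator normally receives
    print(head(z).shape)                    # torch.Size([1, 1024, 4, 4])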

I tried to use view() to increase the number of feature maps to 128, but got the error from the title. So how can I pass lat to the generator, i.e. make this call work?

    new_img = model['generator'](lat)

I guess I should drop the Flatten and Sigmoid at test time and somehow increase the number of feature maps to 128.

view cannot increase the number of elements; it is used in a similar way to reshape but avoids a copy. If you want to increase the number of elements, use repeat or expand.
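
A rough sketch of both options, assuming you want to broadcast the single (1, 1) value across the 128 latent channels the generator expects:

    import torch

    lat = torch.rand(1, 1)                      # stand-in for the discriminator output, shape (1, 1)

    # repeat copies the data: (1, 1) -> (1, 128) -> (1, 128, 1, 1)
    z_rep = lat.repeat(1, 128).view(1, 128, 1, 1)

    # expand avoids the copy by broadcasting a size-1 dimension
    z_exp = lat.view(1, 1, 1, 1).expand(1, 128, 1, 1)

    print(z_rep.shape, z_exp.shape)             # both torch.Size([1, 128, 1, 1])

    # new_img = model['generator'](z_rep)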