How can I convert the result from a Conv layer to a vector

Hi,
I am trying to implement a CNN where my output must be only one element. So I am doing something like this (which I took from the GAN example):

class Discriminator(nn.Module):
    def __init__(self, nc,ndf):
        super(Discriminator, self).__init__()
        self.features = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True)
            # state size. (ndf*8) x 4 x 4
            #nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            #nn.Linear(ndf * 8,1),
            #nn.Sigmoid()
        )
        self.classifier = nn.Sequential(
            nn.Linear(self.inpelts, 1),  # self.inpelts is not known yet at this point
            nn.Sigmoid()
        )
    def forward(self, input):
        x = self.features(input)
        x = x.view(x.size(0), -1)  # B, chans*H*W
        self.inpelts = x.size(1)
        print('x shape ', x.size())
        output = self.classifier(x)
        return output

However, to make this work I need to know in advance the number of elements going into the Linear (fully connected) layer.
Is there a way to build it automatically, like in Caffe or TensorFlow, where the number of input elements is inferred by the framework itself?
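For a fixed 64x64 input I can of course compute the size by hand, since each of the four stride-2 convolutions halves the spatial size (as the comments in the code note), e.g. with nc=3 and ndf=64 as example values:

```python
# Each of the four stride-2 convs halves the spatial size:
# 64 -> 32 -> 16 -> 8 -> 4, so the final feature map is (ndf*8) x 4 x 4.
nc, ndf = 3, 64  # example values, as in the DCGAN example
inpelts = ndf * 8 * 4 * 4  # elements per sample going into the Linear layer
print(inpelts)  # 8192
```

But this breaks as soon as the input resolution changes, hence the question.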
Thanks!

There is no way to detect this automatically with nn.Linear at the moment.
If you use the functional interface (http://pytorch.org/docs/nn.html#torch.nn.functional.linear), you can create the weight in the constructor with a placeholder size:

self.linear_weight = nn.Parameter(torch.randn(1, 1), requires_grad=True)  # resized in the forward function

And in the forward function:

# some stuff before, where x is the input to linear
if self.linear_weight.data.nelement() == 1:
    self.linear_weight.data.resize_(1, x.size(1))  # output_features x input_features
    # initialize weights randomly in [-0.1, 0.1]
    self.linear_weight.data.uniform_(-0.1, 0.1)
x = nn.functional.linear(x, self.linear_weight)
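Putting the pieces together, here is a minimal self-contained sketch of the lazy-initialization idea (the class name `LazyHead` is hypothetical; instead of resizing a placeholder in place, this variant simply creates the parameter on the first forward pass, once the flattened input size is known):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LazyHead(nn.Module):
    """Hypothetical sketch: the linear weight is created lazily on the
    first forward pass, once the flattened input size is known."""
    def __init__(self):
        super().__init__()
        self.linear_weight = None  # created lazily in forward

    def forward(self, x):
        x = x.view(x.size(0), -1)  # B, chans*H*W
        if self.linear_weight is None:
            # output_features x input_features, randomly initialized
            w = torch.empty(1, x.size(1)).uniform_(-0.1, 0.1)
            self.linear_weight = nn.Parameter(w)
        return torch.sigmoid(F.linear(x, self.linear_weight))

head = LazyHead()
out = head(torch.randn(4, 8, 4, 4))  # any spatial size works on the first call
print(out.shape)  # torch.Size([4, 1])
```

One caveat with any lazy scheme like this: the parameter does not exist until the first forward pass, so an optimizer built from `head.parameters()` before that call will not see it.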