How to define network dimensions for 2D array

Hi,

I am trying to make a GAN that takes in arrays of shape 25 x 25 (they can essentially be plotted as 2D images), where the values correspond to energy deposited in a physics detector. They are not “color pixels” like the images usually fed into convolutional nets.

I am wondering: how do I define the shape of the input and output layers? As 1D vectors of all 625 values per “image”, or can I do it as a 2D array?

Thanks for your help.

Michael

By the PyTorch convention it would be batch_size x 1 x 25 x 25, i.e. the N x C x H x W layout expected by the *2d layers (e.g. nn.Conv2d). But it really depends on how you want to treat your data and what type of layers you want to use.
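For example, a quick shape check along those lines (just a sketch; the 16 output channels and the kernel size are arbitrary placeholders, not something specific to your model):

import torch
import torch.nn as nn

# a batch of 8 detector "images": batch_size x channels x height x width
x = torch.randn(8, 1, 25, 25)

# any *2d layer accepts this layout; padding=1 keeps the 25 x 25 spatial size
conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
print(conv(x).shape)  # torch.Size([8, 16, 25, 25])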

Thanks for your response. If I am using a convolutional GAN structure, would both the input noise dimension and the output dimension be batch_size x 1 x 25 x 25?

I was thinking along the lines of:

import torch.nn as nn

class _netG(nn.Module):
    def __init__(self, ngpu):
        super(_netG, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        return self.main(input)

But with the layer parameters (channels, kernel sizes, strides) updated for my 25 x 25 case, roughly as sketched below.
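For instance, one possible adaptation (only a sketch; nz, ngf and nc are placeholder values I picked, and these kernel/stride/padding choices are just one way to land exactly on 25 x 25):

import torch
import torch.nn as nn

nz, ngf, nc = 100, 64, 1  # placeholder latent size, base feature maps, output channels

netG = nn.Sequential(
    # input is the noise vector: nz x 1 x 1
    nn.ConvTranspose2d(nz, ngf * 4, 4, 1, 0, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(True),
    # state size. (ngf*4) x 4 x 4
    nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
    nn.BatchNorm2d(ngf * 2),
    nn.ReLU(True),
    # state size. (ngf*2) x 7 x 7
    nn.ConvTranspose2d(ngf * 2, ngf, 3, 2, 1, bias=False),
    nn.BatchNorm2d(ngf),
    nn.ReLU(True),
    # state size. (ngf) x 13 x 13
    nn.ConvTranspose2d(ngf, nc, 3, 2, 1, bias=False),
    nn.Tanh()
    # state size. (nc) x 25 x 25
)

z = torch.randn(8, nz, 1, 1)  # noise input: batch_size x nz x 1 x 1
print(netG(z).shape)          # torch.Size([8, 1, 25, 25])

One thing I would still have to handle: Tanh outputs values in [-1, 1], so the detector energies would need to be rescaled to that range for training (and scaled back afterwards), or the final activation swapped for something else.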