RuntimeError: size mismatch, m1: [96 x 16384], m2: [1024 x 512]

I am using a variational autoencoder with a feature discriminator and a pixel discriminator.
The batch size is 96.

model:
  vae:
    encoder: [['conv', 64,4,2,1,'bn','LeakyReLU'],
              ['conv', 128,4,2,1,'bn','LeakyReLU'],
              ['conv', 256,4,2,1,'bn','LeakyReLU'],
              ['conv', 512,4,2,1,'bn','LeakyReLU'],
              ['conv', 1024,4,2,1,'bn','LeakyReLU'],
              ['conv', 1024,4,2,1,  '','']
             ]
    code_dim: 3

    decoder: [['conv', 1024,4,2,1,'bn','LeakyReLU',True],
              ['conv', 512,4,2,1,'bn','LeakyReLU',False],
              ['conv', 256,4,2,1,'bn','LeakyReLU',False],
              ['conv', 128,4,2,1,'bn','LeakyReLU',False],
              ['conv', 64,4,2,1,'bn','LeakyReLU',False],
              ['conv',  3,4,2,1,  '','Tanh',False]
             ]
    lr: 0.0001
    betas: [0.5,0.999]

  D_feat:
    dnn: [['fc', 512, '', 'LeakyReLU',0],
          ['fc', 256, '', 'LeakyReLU',0],
          ['fc', 128, '', 'LeakyReLU',0],
          ['fc', 64, '', 'LeakyReLU',0],
          ['fc', 3, '', '', 0]
         ]
    lr: 0.0001
    betas: [0.5,0.999]

  D_pix:
    dnn: [['conv', 16, 4,2,1,'','LeakyReLU'],
          ['conv', 32,4,2,1,'','LeakyReLU'],
          ['conv', 64,4,2,1,'','LeakyReLU'],
          ['conv', 128,4,2,1,'','LeakyReLU'],
          ['conv', 256,4,2,1,'','LeakyReLU'],
          ['fc', 512, '', 'LeakyReLU',0],
          ['fc', [1,3], '', '',0]
         ]
    lr: 0.0001
    betas: [0.5,0.999]

This architecture is designed for an image size of 64. Now I want to use the same architecture with an image size of 256; what changes do I need to make to its parameters?

If you haven’t used adaptive pooling layers, you would have to increase the in_features of the first linear layer, since the spatial size of the incoming activation will be larger.
This would also explain the error message, which points towards a shape mismatch in a linear layer.
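
As a rough sketch of the arithmetic (assuming every conv in your encoder halves the spatial size, which stride 2 with kernel 4 and padding 1 does), the flattened size reaching the first linear layer of the feature discriminator grows with the input resolution:

# Minimal sketch, not your exact code: flattened feature size after n stride-2 convolutions.
def flattened_features(img_size, n_stride2_convs, channels):
    spatial = img_size // (2 ** n_stride2_convs)
    return channels * spatial * spatial

# Encoder output fed to the feature discriminator (6 stride-2 convs, 1024 channels in the last one):
print(flattened_features(64, 6, 1024))   # 1024  -> matches Linear(1024, 512) for 64x64 inputs
print(flattened_features(256, 6, 1024))  # 16384 -> the m1 size in your error for 256x256 inputs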

I’m not sure what kind of definitions you are using for the layers, and I cannot see where the input and output features are defined.

import torch
import torch.nn as nn

def get_act(name):
    if name == 'LeakyReLU':
        return nn.LeakyReLU(0.2)
    elif name == 'ReLU':
        return nn.ReLU()
    elif name == 'Tanh':
        return nn.Tanh()
    elif name == '':
        return None
    else:
        raise NameError('Unknown activation:'+name)

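# Build a model from the config: 'vae' -> UFDN encoder/decoder, 'nn' -> fully-connected
# discriminator, 'cnn' -> convolutional discriminator (flattened before its first fc layer).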
def LoadModel(name,parameter,img_size,input_dim):
    if name == 'vae':
        code_dim = parameter['code_dim']
        enc_list = []

        for layer,para in enumerate(parameter['encoder']):
            if para[0] == 'conv':
                if layer==0:
                    init_dim = input_dim
                next_dim,kernel_size,stride,pad,bn,act = para[1:7]
                act = get_act(act)
                enc_list.append((para[0],(init_dim, next_dim,kernel_size,stride,pad,bn,act)))
                init_dim = next_dim
            else:
                raise NameError('Unknown encoder layer type:'+para[0])

        dec_list = []
        for layer,para in enumerate(parameter['decoder']):
            if para[0] == 'conv':
                next_dim,kernel_size,stride,pad,bn,act,insert_code = para[1:8]
                act = get_act(act)
                dec_list.append((para[0],(init_dim, next_dim,kernel_size,stride,pad,bn,act),insert_code))
                init_dim = next_dim
            else:
                raise NameError('Unknown decoder layer type:'+para[0])
        return UFDN(enc_list,dec_list,code_dim)
    elif name == 'nn':
        dnet_list = []
        init_dim = input_dim
        for para in parameter['dnn']:
            if para[0] == 'fc':
                next_dim,bn,act,dropout = para[1:5]
                act = get_act(act)
                dnet_list.append((para[0],(init_dim, next_dim,bn,act,dropout)))
                init_dim = next_dim
            else:
                raise NameError('Unknown nn layer type:'+para[0])
        return Discriminator(dnet_list)
    elif name == 'cnn':
        dnet_list = []
        init_dim = input_dim
        cur_img_size = img_size
        reshaped = False
        for layer,para in enumerate(parameter['dnn']):
            if para[0] == 'conv':
                next_dim,kernel_size,stride,pad,bn,act = para[1:7]
                act = get_act(act)
                dnet_list.append((para[0],(init_dim, next_dim,kernel_size,stride,pad,bn,act)))
                init_dim = next_dim
                cur_img_size /= 2
            elif para[0] == 'fc':
                if not reshaped:
                    init_dim = int(cur_img_size*cur_img_size*init_dim)
                    reshaped = True
                next_dim,bn,act,dropout = para[1:5]
                act = get_act(act)
                dnet_list.append((para[0],(init_dim, next_dim,bn,act,dropout)))
                init_dim = next_dim
            else:
                raise NameError('Unknown cnn layer type:'+para[0])
        return Discriminator(dnet_list)
    else:
        raise NameError('Unknown model type:'+name)


# custom weights initialization
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(0.0, 0.02)
        m.bias.data.fill_(0)

# create a Convolution/Deconvolution block
def ConvBlock(c_in, c_out, k=4, s=2, p=1, norm='bn', activation=None, transpose=False, dropout=None):
    layers = []
    if transpose:
        layers.append(nn.ConvTranspose2d(c_in, c_out, kernel_size=k, stride=s, padding=p))
    else:
        layers.append(         nn.Conv2d(c_in, c_out, kernel_size=k, stride=s, padding=p))
    if dropout:
        layers.append(nn.Dropout2d(dropout))
    if norm == 'bn':
        layers.append(nn.BatchNorm2d(c_out))
    if activation is not None:
        layers.append(activation)
    return nn.Sequential(*layers)

# create a fully connected layer
def FC(c_in, c_out, norm='bn', activation=None, dropout=None):
    layers = []
    layers.append(nn.Linear(c_in,c_out))
    if dropout:
        if dropout>0:
            layers.append(nn.Dropout(dropout))
    if norm == 'bn':
        layers.append(nn.BatchNorm1d(c_out))
    if activation is not None:
        layers.append(activation)
    return nn.Sequential(*layers)

# UFDN model
# Reference : https://github.com/pytorch/examples/blob/master/vae/main.py
# list of layer should be a list with each element being (layer type,(layer parameter))
# fc should occur after/before any convblock if used in encoder/decoder
# e.g. ('conv',( input_dim, neurons, kernel size, stride, padding, normalization, activation))
#      ('fc'  ,( input_dim, neurons, normalization, activation))
class UFDN(nn.Module):
    def __init__(self, enc_list, dec_list, attr_dim):
        super(UFDN, self).__init__()

        ### Encoder
        self.enc_layers = []

        for l in range(len(enc_list)):
            self.enc_layers.append(enc_list[l][0])
            if enc_list[l][0] == 'conv':
                c_in,c_out,k,s,p,norm,act = enc_list[l][1]
                if l == len(enc_list) -1 :
                    setattr(self, 'enc_mu', ConvBlock(c_in,c_out,k,s,p,norm,act,transpose=False))
                    setattr(self, 'enc_logvar', ConvBlock(c_in,c_out,k,s,p,norm,act,transpose=False))
                else:
                    setattr(self, 'enc_'+str(l), ConvBlock(c_in,c_out,k,s,p,norm,act,transpose=False))
            elif enc_list[l][0] == 'fc':
                c_in,c_out,norm,act = enc_list[l][1]
                if l == len(enc_list) -1 :
                    setattr(self, 'enc_mu', FC(c_in,c_out,norm,act))
                    setattr(self, 'enc_logvar', FC(c_in,c_out,norm,act))
                else:
                    setattr(self, 'enc_'+str(l), FC(c_in,c_out,norm,act))
            else:
                raise ValueError('Unrecognized layer type')

        ### Decoder
        self.dec_layers = []
        self.attr_dim = attr_dim

        for l in range(len(dec_list)):
            self.dec_layers.append((dec_list[l][0],dec_list[l][2]))
            if dec_list[l][0] == 'conv':
                c_in,c_out,k,s,p,norm,act = dec_list[l][1]
                if dec_list[l][2]: c_in += self.attr_dim
                setattr(self, 'dec_'+str(l), ConvBlock(c_in,c_out,k,s,p,norm,act,transpose=True))
            elif dec_list[l][0] == 'fc':
                c_in,c_out,norm,act = dec_list[l][1]
                if dec_list[l][2]: c_in += self.attr_dim
                setattr(self, 'dec_'+str(l), FC(c_in,c_out,norm,act))
            else:
                raise ValueError('Unrecognized layer type')

        self.apply(weights_init)

    def encode(self, x):
        for l in range(len(self.enc_layers)-1):
            if (self.enc_layers[l] == 'fc')  and (len(x.size())>2):
                batch_size = x.size()[0]
                x = x.view(batch_size,-1)
            x = getattr(self, 'enc_'+str(l))(x)

        if (self.enc_layers[-1] == 'fc')  and (len(x.size())>2):
            batch_size = x.size()[0]
            x = x.view(batch_size,-1)
        mu = getattr(self, 'enc_mu')(x)
        logvar = getattr(self, 'enc_logvar')(x)

        return mu, logvar

    def reparameterize(self, mu, logvar):
        if self.training:
            std = logvar.mul(0.5).exp_()
            eps = torch.randn_like(std)
            return eps.mul(std).add_(mu)
        else:
            return mu

    def decode(self, z, insert_attrs = None):
        for l in range(len(self.dec_layers)):
            if (self.dec_layers[l][0] != 'fc') and (len(z.size()) != 4):
                z = z.unsqueeze(-1).unsqueeze(-1)
            if (insert_attrs is not None) and (self.dec_layers[l][1]):
                if len(z.size()) == 2:
                    z = torch.cat([z,insert_attrs],dim=1)
                else:
                    H,W = z.size()[2], z.size()[3]
                    z = torch.cat([z,insert_attrs.unsqueeze(-1).unsqueeze(-1).repeat(1,1,H,W)],dim=1)
            z = getattr(self, 'dec_'+str(l))(z)
        return z

    def forward(self, x, insert_attrs = None, return_enc = False):
        batch_size = x.size()[0]
        mu, logvar = self.encode(x)
        if len(mu.size()) > 2:
            mu = mu.view(batch_size,-1)
            logvar = logvar.view(batch_size,-1)
        z = self.reparameterize(mu, logvar)
        if return_enc:
            return z
        else:
            return self.decode(z,insert_attrs), mu, logvar


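# Discriminator built from a layer list; the final entry's output dimension may be a
# list, in which case one fc head is created per entry on top of a shared trunk.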
class Discriminator(nn.Module):
    def __init__(self, layer_list):
        super(Discriminator, self).__init__()

        self.layer_list = []

        for l in range(len(layer_list)-1):
            self.layer_list.append(layer_list[l][0])
            if layer_list[l][0] == 'conv':
                c_in,c_out,k,s,p,norm,act = layer_list[l][1]
                setattr(self, 'layer_'+str(l), ConvBlock(c_in,c_out,k,s,p,norm,act,transpose=False))
            elif layer_list[l][0] == 'fc':
                c_in,c_out,norm,act,drop = layer_list[l][1]
                setattr(self, 'layer_'+str(l), FC(c_in,c_out,norm,act,drop))
            else:
                raise ValueError('Unrecognized layer type')


        self.layer_list.append(layer_list[-1][0])
        c_in,c_out,norm,act,_ = layer_list[-1][1]
        if not isinstance(c_out, list):
            c_out = [c_out]
        self.output_dim = len(c_out)

        for idx,d in enumerate(c_out):
            setattr(self, 'layer_out_'+str(idx), FC(c_in,d,norm,act,0))

        self.apply(weights_init)

    def forward(self, x):
        for l in range(len(self.layer_list)-1):
            if (self.layer_list[l] == 'fc') and (len(x.size()) != 2):
                batch_size = x.size()[0]
                x = x.view(batch_size,-1)
            x = getattr(self, 'layer_'+str(l))(x)

        output = []
        for d in range(self.output_dim):
            output.append(getattr(self,'layer_out_'+str(d))(x))

        if self.output_dim == 1:
            return output[0]
        else:
            return tuple(output)

I would recommend printing the model architecture once it’s created and checking which linear layer uses in_features=1024 and out_features=512.
Once you’ve isolated it, change its in_features to 16384.
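
For reference, in the LoadModel code you posted the first fc layer’s in_features comes from the input_dim argument, so (assuming D_feat is built through the 'nn' branch, and assuming a config dict named config purely for illustration) the change could look like:

# Hypothetical call; adjust it to however you actually construct D_feat.
# For 256x256 inputs the encoder output is 1024 channels on a 4x4 map,
# i.e. 1024 * 4 * 4 = 16384 flattened features.
D_feat = LoadModel('nn', config['model']['D_feat'], img_size=256, input_dim=1024 * 4 * 4)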

Sir, this is my model architecture:

Encoder
[Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(512, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(1024, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))]
[Conv2d(1024, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))]

Decoder
[ConvTranspose2d(1027, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), Tanh()]

Feature Discriminator
[Linear(in_features=1024, out_features=512, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=512, out_features=256, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=256, out_features=128, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=128, out_features=64, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=64, out_features=3, bias=True)]

Pixel Discriminator
[Conv2d(3, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(16, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=16384, out_features=512, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=512, out_features=1, bias=True)]
[Linear(in_features=512, out_features=3, bias=True)]

When I set the input features to 16384, the architecture is now:

Encoder
[Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(512, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[Conv2d(1024, 16384, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))]
[Conv2d(1024, 16384, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))]

Decoder
[ConvTranspose2d(16387, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(1024, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), LeakyReLU(negative_slope=0.2)]
[ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), Tanh()]

Feature Discriminator
[Linear(in_features=16384, out_features=512, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=512, out_features=256, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=256, out_features=128, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=128, out_features=64, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=64, out_features=3, bias=True)]

Pixel Discriminator
[Conv2d(3, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(16, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=16384, out_features=512, bias=True), LeakyReLU(negative_slope=0.2)]
[Linear(in_features=512, out_features=1, bias=True)]
[Linear(in_features=512, out_features=3, bias=True)]

and I get the error below:

RuntimeError: size mismatch, m1: [96 x 262144], m2: [16384 x 512] at /opt/conda/conda-bld/pytorch_1556653145446/work/aten/src/THC/generic/THCTensorMathBlas.cu:268

I assume the in_channels=16387 for the first ConvTranspose2d in the Decoder is a typo?

Are you working with variable input shapes? The shape mismatch shows an incoming activation which gets larger.

Sir, the attribute dimension, which is 3, is being added to the input features in lines 159 and 163, hence the 1027 (1024 + 3) and 16387 (16384 + 3) input channels in the printed decoders.

All images are of size 256x256x3

This architecture is working fine for image size 64x64x3.
I want to work with images of size 256x256x3 so that I can visualise the results, but I am getting the size mismatch error and I am unable to find where it is coming from.

I would recommend using the same workflow as before.
Once you see the shape mismatch, print the model, search for the layer which is creating the error, and increase its in_features to the value given in the error message.
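
As a sketch of the shape arithmetic behind the new error (assuming 256x256 inputs and the encoder you printed):

# The last encoder conv now outputs 16384 channels on a 4x4 map, so the flattened
# activation has 16384 * 4 * 4 = 262144 features, while Linear(16384, 512) expects 16384.
# Keeping that conv at 1024 output channels gives 1024 * 4 * 4 = 16384 features,
# which matches a feature discriminator whose first Linear has in_features=16384.
print(16384 * 4 * 4)  # 262144 -> the m1 size in the new error
print(1024 * 4 * 4)   # 16384  -> the in_features the first Linear should expect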