Convert a PyTorch generator to TensorFlow

Hi guys, for a group project we’re trying to convert this PyTorch code to TensorFlow, but it didn’t go as planned: the net works well in PyTorch, but our TF version takes a very long time to train, so we thought some of you could help us. This is the model we want to convert:

import torch.nn as nn

class generator_model(nn.Module):
    def __init__(self):
        super().__init__()
        self.main = nn.Sequential(
            # latent (N, 100, 1, 1) -> (N, 160*16, 4, 4); kernel 4, stride 1, no padding
            nn.ConvTranspose2d(100, 160 * 16, 4, bias=False),
            nn.BatchNorm2d(160 * 16),
            nn.ReLU(True),
            # each stride-2 block below doubles H and W and halves the channels
            # -> (N, 160*8, 8, 8)
            nn.ConvTranspose2d(160 * 16, 160 * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(160 * 8),
            nn.ReLU(True),
            # -> (N, 160*4, 16, 16)
            nn.ConvTranspose2d(160 * 8, 160 * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(160 * 4),
            nn.ReLU(True),
            # -> (N, 160*2, 32, 32)
            nn.ConvTranspose2d(160 * 4, 160 * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(160 * 2),
            nn.ReLU(True),
            # -> (N, 160, 64, 64)
            nn.ConvTranspose2d(160 * 2, 160, 4, 2, 1, bias=False),
            nn.BatchNorm2d(160),
            nn.ReLU(True),
            # -> (N, 3, 128, 128), squashed into [-1, 1]
            nn.ConvTranspose2d(160, 3, 4, 2, 1, bias=False),
            nn.Tanh()
        )

    def forward(self, input):
        return self.main(input)
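
For reference, here is the quick shape check we run on the PyTorch side. The model wants a 4D NCHW latent tensor, so our batch of 16 noise vectors becomes (16, 100, 1, 1):

import torch

gen = generator_model()
z = torch.randn(16, 100, 1, 1)  # 16 latent vectors, each as a 1x1 "image"
print(gen(z).shape)  # torch.Size([16, 3, 128, 128])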

What we can’t figure out is why the first ConvTranspose2d layer from PyTorch is usually replaced by a Dense layer in similar TF implementations.
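
Our current understanding (please correct us if this is wrong) is that on a 1x1 input, a 4x4 transposed convolution with stride 1 and no padding is a single linear map from the 100 latent values to 4*4*C outputs, which is exactly what Dense(4*4*C) followed by Reshape((4, 4, C)) computes, so the two first layers should be interchangeable. This little sketch reproduces the PyTorch first layer directly in Keras (the 1x1 Reshape is needed because Conv2DTranspose expects a 4D NHWC tensor):

import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(100,))
x = layers.Reshape((1, 1, 100))(inputs)  # treat the latent vector as a 1x1 "image"
# kernel 4, stride 1, 'valid' padding: 1x1 -> 4x4, matching
# nn.ConvTranspose2d(100, 160 * 16, 4, bias=False)
x = layers.Conv2DTranspose(160 * 16, 4, strides=1, padding='valid', use_bias=False)(x)
print(tf.keras.Model(inputs, x).output_shape)  # (None, 4, 4, 2560)

Anyway, this is our current TF attempt: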

import tensorflow as tf
from tensorflow.keras import layers

def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(160 * 16, use_bias=False, input_shape=(100,)))
    model.add(layers.Reshape((4, 4, 160)))  # -> (4, 4, 160): only 160 channels here, vs 160*16 in PyTorch
    model.add(layers.Conv2DTranspose(160 * 16, (4, 4), strides=(2, 2), padding='same', use_bias=False))  # -> (8, 8, 160*16)
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())
    model.add(layers.Conv2DTranspose(160 * 8, (4, 4), strides=(2, 2), padding='same', use_bias=False))  # -> (16, 16, 160*8)
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())
    model.add(layers.Conv2DTranspose(160 * 4, (4, 4), strides=(2, 2), padding='same', use_bias=False))  # -> (32, 32, 160*4)
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())
    model.add(layers.Conv2DTranspose(160 * 2, (4, 4), strides=(2, 2), padding='same', use_bias=False))  # -> (64, 64, 160*2)
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())
    model.add(layers.Conv2DTranspose(3, (4, 4), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))  # -> (128, 128, 3)
    return model
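
This version does run and produces the right output size; with the same batch of 16 latent vectors (kept flat as (16, 100), since the Dense layer takes the vector directly):

generator = make_generator_model()
noise = tf.random.normal([16, 100])
print(generator(noise, training=False).shape)  # (16, 128, 128, 3)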

How different are these two implementations, really? And how can Conv2DTranspose be used as the first TF layer with the same (16, 100) noise input?
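
From lining the two up layer by layer, the difference seems bigger than it looks: our Keras version only has 160 channels at 4x4 (the PyTorch model has 160*16 there), every transposed convolution after that is twice as wide as its PyTorch counterpart (160*16 filters at 8x8 versus 160*8, and so on), and the plain 160-channel layer is missing entirely. Since the cost of a Conv2DTranspose grows with both its input and output channel counts at the output resolution, that doubled width at 16x16 and above is our best guess for why the TF model trains so much more slowly. Below is the closest layer-for-layer port we could come up with, using the 1x1-Reshape trick from above so the first layer is a real Conv2DTranspose (function and variable names are just ours):

import tensorflow as tf
from tensorflow.keras import layers

def make_generator_model_ported():
    model = tf.keras.Sequential()
    # same role as nn.ConvTranspose2d(100, 160*16, 4): on a 1x1 input,
    # kernel 4 with stride 1 and 'valid' padding yields a 4x4 map
    model.add(layers.Reshape((1, 1, 100), input_shape=(100,)))
    model.add(layers.Conv2DTranspose(160 * 16, 4, strides=1, padding='valid', use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.ReLU())
    # each stride-2 block doubles H and W and halves the channels,
    # mirroring the PyTorch model: 4 -> 8 -> 16 -> 32 -> 64
    for filters in (160 * 8, 160 * 4, 160 * 2, 160):
        model.add(layers.Conv2DTranspose(filters, 4, strides=2, padding='same', use_bias=False))
        model.add(layers.BatchNormalization())
        model.add(layers.ReLU())
    # final block: 64 -> 128, 3 channels, tanh into [-1, 1]
    model.add(layers.Conv2DTranspose(3, 4, strides=2, padding='same', use_bias=False, activation='tanh'))
    return model

ported = make_generator_model_ported()
print(ported(tf.random.normal([16, 100]), training=False).shape)  # (16, 128, 128, 3)

Does this match the PyTorch model, or are we still missing something?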

I think you might get a better and faster answer on Stack Overflow for TensorFlow-specific questions, as there are probably more TF users there than here. :slight_smile: