Hi guys,
I'm having some difficulty grasping the concept, so I'd appreciate your feedback!
I need to map random noise of shape (100, 1, 1) to an image tensor of shape (3, 64, 64).
The generator to implement consists of 5 transposed convolutional layers:
Use 512, 256, 128, 64, and 3 output channels for these layers.
Use a kernel size of 4.
Use a padding of 0 in the first transposed convolution and a padding of 1 for
all subsequent transposed convolutions.
Do not use a bias.
Use a stride of 1 for the first transposed convolution and a stride of 2 for all subsequent transposed convolutions.
Apply batch normalization after each transposed convolutional layer except for the last one.
Apply a ReLU activation function after each batch normalization layer. Use the torch.nn.ReLU
function for this.
Apply a hyperbolic tangent function after the last transposed convolutional layer.
Return the output after this function.
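For reference, with no output padding or dilation the spatial size after a transposed convolution is out = (in - 1) * stride - 2 * padding + kernel. A minimal sketch (plain Python, no torch needed) checking that the five layers specified above take a 1x1 input up to 64x64:

```python
def convtranspose_out(size, kernel=4, stride=1, padding=0):
    # Output spatial size of nn.ConvTranspose2d
    # (no output_padding, no dilation)
    return (size - 1) * stride - 2 * padding + kernel

size = 1  # the (100, 1, 1) noise is 1x1 spatially
# (stride, padding) per layer: stride 1 / padding 0 first,
# then stride 2 / padding 1 for the remaining four layers
for stride, padding in [(1, 0), (2, 1), (2, 1), (2, 1), (2, 1)]:
    size = convtranspose_out(size, stride=stride, padding=padding)
    print(size)  # 4, 8, 16, 32, 64
```

So the spatial dimensions already reach 64x64 through the layers; the last layer only needs to reduce the channels to 3.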
My Implementation:
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.main = nn.Sequential(
            # -----------------------------------------------
            # 1. Layer: (100, 1, 1) -> (512, 4, 4)
            # -----------------------------------------------
            nn.ConvTranspose2d(in_channels=100, out_channels=512,
                               kernel_size=4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            # -----------------------------------------------
            # 2. Layer: (512, 4, 4) -> (256, 8, 8)
            # -----------------------------------------------
            nn.ConvTranspose2d(in_channels=512, out_channels=256,
                               kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            # -----------------------------------------------
            # 3. Layer: (256, 8, 8) -> (128, 16, 16)
            # -----------------------------------------------
            nn.ConvTranspose2d(in_channels=256, out_channels=128,
                               kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            # -----------------------------------------------
            # 4. Layer: (128, 16, 16) -> (64, 32, 32)
            # -----------------------------------------------
            nn.ConvTranspose2d(in_channels=128, out_channels=64,
                               kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            # -----------------------------------------------
            # 5. Layer: (64, 32, 32) -> (3, 64, 64)
            # -----------------------------------------------
            nn.ConvTranspose2d(in_channels=64, out_channels=3,
                               kernel_size=4, stride=2, padding=1, bias=False),
            nn.Tanh()
        )

    def forward(self, data):
        return self.main(data)
I'd like to know whether the implementation is correct. Thanks!