This is probably more of a basic Python class question. (If it doesn't belong here, please let me know.)
I am trying to build a simple conditional GAN model.
I created my own custom Dataset class and collate_fn which is fed to DataLoader class.
I customized those since input data has varying shape.
Thankfully, I followed this solution, which uses the rnn.pad_sequence function to pad within each batch (thanks to @jdhao and @pinocchio).
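For context, a minimal sketch of such a collate_fn (the exact dataset items and label handling here are assumptions, not my actual code):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # Each sample is assumed to be a ([seq_len_i, 408] tensor, label) pair,
    # where seq_len_i varies from sample to sample.
    inputs, labels = zip(*batch)
    # pad_sequence zero-pads every sample to the longest sequence *in this
    # batch*, which is why the second dimension differs between batches.
    padded = pad_sequence(inputs, batch_first=True)
    return padded, torch.tensor(labels)
```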
n_epochs = 100
for epoch in range(n_epochs):
    for batch_idx, (data_input, labels) in enumerate(data_loader):
        print(data_input.size())
The code above gives output like the following:
torch.Size([10, 86, 408])
torch.Size([10, 105, 408])
torch.Size([10, 110, 408])
torch.Size([10, 87, 408])
torch.Size([10, 76, 408])
torch.Size([10, 44, 408])
torch.Size([10, 54, 408])
torch.Size([10, 52, 408])
torch.Size([10, 89, 408])
torch.Size([10, 59, 408])
torch.Size([10, 60, 408])
torch.Size([10, 43, 408])
You can see that the first number is the batch size, while the varying second number is the zero-padded sequence length.
The problem is that I need this second number inside the Generator class.
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.label_emb = nn.Embedding(n_class, n_class)
        self.model = nn.Sequential(
            nn.Linear(latent_dim + n_class, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 1024),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1024, int(dt_h * dt_w)),
            nn.Sigmoid()
        )

    def forward(self, z, labels):
        gen_input = torch.cat((self.label_emb(labels), z), -1)
        tech = self.model(gen_input)
        tech = tech.view(batch_size, dt_h, dt_w)
        return tech
The dt_h in the code is what should be replaced by the varying number.
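To illustrate where that number lives at runtime: in the training loop it is just data_input.size(1). One option I'm considering (a sketch of an assumed approach, not working GAN code) is reading it per batch and handing it to forward as an extra argument:

```python
import torch

# Assumed sketch: the varying padded length is simply the second dimension
# of the batch tensor, so it can be read per batch in the training loop.
def get_seq_len(data_input):
    return data_input.size(1)

# e.g. inside the loop:
#   seq_len = get_seq_len(data_input)      # the varying number, e.g. 86
#   fake = generator(z, labels, seq_len)   # forward(self, z, labels, seq_len)
# and in forward, the final reshape would then become:
#   tech = tech.view(batch_size, seq_len, dt_w)
```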
Any advice would be appreciated. Thanks!