Hello, I’m trying to implement a simple conditional GAN and wondering if what I’ve done is correct.
As far as I’ve understood, a conditional GAN is based on a simple architectural modification of the base GAN: we concatenate a suitable vector of target properties, or labels, to the inputs of both networks (so we end up performing a sort of semi-supervised training).
Currently, my model, made up of a generator and a discriminator, looks like this:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from utils import data_generator, get_data_loaders
import pandas as pd


class Discriminator(nn.Module):
    def __init__(self, in_features):
        # in_features must already include the label/property dimension,
        # since forward() concatenates x and y before the first layer
        super().__init__()
        self.disc = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.LeakyReLU(0.01),
            nn.Linear(128, 1),
            nn.Sigmoid(),
        )

    def forward(self, x, y):
        input_ = torch.cat([x, y], dim=1)
        return self.disc(input_)


class Generator(nn.Module):
    def __init__(self, z_dim, comp_dim):
        # z_dim must likewise include the label/property dimension,
        # because forward() feeds the concatenation of noise and labels
        super().__init__()
        self.gen = nn.Sequential(
            nn.Linear(z_dim, 256),
            nn.LeakyReLU(0.01),
            nn.Linear(256, comp_dim),
            nn.Sigmoid(),
        )

    def forward(self, x, y):
        input_ = torch.cat([x, y], dim=1)
        return self.gen(input_)
```
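To convince myself the dimensions work out, I wrote a small standalone sanity check of the concatenation (all dimensions here are made up for illustration; the key point is that the first linear layer must expect noise dim + label dim):

```python
import torch
import torch.nn as nn

# hypothetical dimensions, just for a shape check
noise_dim, label_dim = 64, 10
batch = 8

# the first generator layer must be sized for the concatenated input
first_layer = nn.Linear(noise_dim + label_dim, 256)

z = torch.randn(batch, noise_dim)   # noise batch
y = torch.randn(batch, label_dim)   # batch of property vectors
out = first_layer(torch.cat([z, y], dim=1))
print(out.shape)  # torch.Size([8, 256])
```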
Do you think this looks correct?
I’m wondering because I’ve seen people construct an nn.Embedding()
from the vector that we are conditioning on, like in this tutorial I’m following. I don’t really understand why this is the case. In my situation, for example, I have a vector of continuous target properties that I wouldn’t convert into an embedding.
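For reference, the pattern I’ve seen in tutorials looks roughly like the sketch below (my paraphrase, not the tutorial’s exact code). As far as I can tell, it assumes discrete class labels (e.g. digit classes 0–9), which nn.Embedding maps to learned dense vectors before concatenation:

```python
import torch
import torch.nn as nn

# hypothetical sizes: 10 discrete classes, each embedded into 32 dims
num_classes, embed_dim, z_dim = 10, 32, 64

# nn.Embedding maps an integer class id to a learned dense vector;
# it only applies to discrete labels, whereas a vector of continuous
# properties could be concatenated directly
label_emb = nn.Embedding(num_classes, embed_dim)

z = torch.randn(4, z_dim)                 # noise batch
labels = torch.tensor([0, 3, 7, 9])       # integer class labels
cond = label_emb(labels)                  # -> (4, embed_dim)
gen_input = torch.cat([z, cond], dim=1)   # -> (4, z_dim + embed_dim)
print(gen_input.shape)  # torch.Size([4, 96])
```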