I’m having a lot of issues implementing this paper, Generative Modeling for Protein Structures. I’m using the DCGAN tutorial as a reference since their architectures are similar.
Context:
The dataloader in the tutorial is implemented as:
dataset = dset.ImageFolder(...)
for i, data in enumerate(dataloader, 0):
    real_cpu = data[0].to(device)
and gives shapes:
print(data[0].shape)
print(data[1].shape)
torch.Size([128, 3, 64, 64])
torch.Size([128])
Whereas my dataset is implemented as:
with h5py.File('/home/collin/protein_maps/dataset.hdf5', 'r') as f:
    x = f['train_64'][:]
dataloader = torch.utils.data.DataLoader(x, ...)

for i, data in enumerate(dataloader, 0):
    # unsqueeze dim 1 to convert [128, 64, 64] to [128, 1, 64, 64] to conform to D's architecture
    real_cpu = data.unsqueeze(dim=1).type(torch.FloatTensor).to(device)
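An alternative I considered (I'm not sure it's any better) is to add the channel dimension and cast to float once, up front, instead of unsqueezing every batch. A minimal sketch with a random array standing in for the HDF5 contents:

```python
import numpy as np
import torch

# stand-in for f['train_64'][:] — 1000 contact maps of shape 64x64
x = np.random.rand(1000, 64, 64).astype(np.float32)

# add the channel dim once: [1000, 64, 64] -> [1000, 1, 64, 64]
tensors = torch.from_numpy(x).unsqueeze(1)
dataloader = torch.utils.data.DataLoader(tensors, batch_size=128, shuffle=True)

batch = next(iter(dataloader))
print(batch.shape)  # torch.Size([128, 1, 64, 64])
```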
This causes problems later on, when I enforce symmetry on the generated maps before passing them from the generator to the discriminator, as specified in the original paper:
During training, we enforce that G(z) be positive by clamping output values above zero and symmetric by setting G(z) = (G(z)+G(z).T)/2 before passing the generated map to the discriminator.
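Concretely, my attempt at this step in the training loop looks roughly like the following (shapes are from my 64 GAN):

```python
import torch

fake = torch.randn(128, 1, 64, 64)  # generator output G(z)
fake = fake.clamp(min=0)            # enforce positivity

# .T reverses ALL dimensions of a tensor, not just the last two
# (it is also deprecated for >2-D tensors in recent PyTorch)
print(fake.T.shape)  # torch.Size([64, 64, 1, 128])

try:
    sym = (fake + fake.T) / 2  # tries to broadcast [128,1,64,64] with [64,64,1,128]
except RuntimeError as e:
    print(e)
```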
but I get a broadcasting error between the shapes [128, 1, 64, 64] and [64, 64, 1, 128], which makes sense, since .T reverses every dimension of the tensor rather than just the last two. The authors' architecture listing (quoted verbatim below) places this step inside the generator:
Model architectures. Each layer is presented as:
Layer(filters, kernel size, stride, padding)
------------------64 GAN------------------
down-scale factor = 100
--Generator--
nz = 100
ConvTranspose2d(512, 4, 1, 0)
BatchNorm2d(512)
LeakyReLU(0.2)
ConvTranspose2d(256, 4, 2, 1)
BatchNorm2d(256)
LeakyReLU(0.2)
ConvTranspose2d(128, 4, 2, 1)
BatchNorm2d(128)
LeakyReLU(0.2)
ConvTranspose2d(64, 4, 2, 1)
BatchNorm2d(64)
LeakyReLU(0.2)
ConvTranspose2d(1, 4, 2, 1)
Clamp(>0)
Enforce Symmetry
but I’m not sure how to do this in practice as my architecture looks like:
self.main = nn.Sequential(
    nn.ConvTranspose2d(nz, 512, kernel_size=4, stride=1, padding=0),
    nn.BatchNorm2d(512),
    nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(256),
    nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(128),
    nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
)
Question
How can I clamp and enforce symmetry in this part of my architecture? Is this possible inside nn.Sequential, or do I have to do it during training? If I do it during training, what's the best way to operate on only the [64, 64] maps inside a [128, 1, 64, 64] batch and reinsert them afterwards? Is there a built-in PyTorch function for enforcing symmetry that works in nn.Sequential?
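To make the question concrete, something like the following custom module is what I have in mind for the tail of the generator, but I don't know whether this is the idiomatic approach (here I'm guessing that transpose(-2, -1), which swaps only the last two dims, avoids the broadcasting problem):

```python
import torch
import torch.nn as nn

class ClampSymmetrize(nn.Module):
    """Clamp outputs above zero, then average each map with its transpose."""
    def forward(self, x):
        x = x.clamp(min=0)
        # swap only the spatial dims so batch/channel dims still line up
        return (x + x.transpose(-2, -1)) / 2

# appended after the final ConvTranspose2d of the generator
tail = nn.Sequential(
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
    ClampSymmetrize(),
)

out = tail(torch.randn(128, 64, 32, 32))
print(out.shape)                                   # torch.Size([128, 1, 64, 64])
print(torch.allclose(out, out.transpose(-2, -1)))  # True (symmetric)
print((out >= 0).all().item())                     # True (non-negative)
```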