1-D DCGAN Error

Hi all! I am trying to build a 1D DCGAN model but getting this error:

```
Expected 3-dimensional input for 3-dimensional weight [1024, 1, 4], but got 1-dimensional input of size [1] instead.
```

My training set has the shape [262144, 1]. I tried the unsqueeze method, but it did not work.
My generator and discriminator:


Not sure what is wrong. Thanks for any suggestions!

I don’t know where the error is exactly raised, but both models use a 1D (transposed) convolution, which expects an input in the shape [batch_size, channels, seq_len].
Based on your input it seems you are using [batch_size=262144, channels=1], which is missing the temporal dimension, so you might need to unsqueeze it.
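For example, a minimal sketch of that layout (the kernel_size=1 conv here is just an illustration, not your model):

```python
import torch
import torch.nn as nn

x = torch.randn(262144, 1)   # [batch_size, channels], as described
x = x.unsqueeze(2)           # -> [262144, 1, 1] = [batch_size, channels, seq_len]

# Toy conv with kernel_size=1 so that a length-1 sequence is valid;
# a kernel size of 4, as in your models, would need seq_len >= 4.
conv = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=1)
print(conv(x).shape)         # torch.Size([262144, 8, 1])
```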

If you get stuck, please post a minimal, executable code snippet by wrapping it into three backticks ```, which would make debugging easier.

Thank you for your response. I see your help all over the blog posts and you are doing awesome work! For a better understanding, I drew the model here. I am not totally sure about its correctness, as this is my first time doing 1D convolutions:

So, when I unsqueeze my input:

```python
a11n = torch.unsqueeze(a11n, 1)
```

I get a tensor of shape [262144, 1, 1].

Then I think I get errors in the discriminator part.
Here is the error I get:

```
ValueError: x and y can be no greater than 2-D, but have shapes (512,) and torch.Size([512, 1, 1])
```

I really do not understand this error. Here is my code again:

```python
import torch
import torch.nn as nn
import torch.optim as optim

batch_size = 512
nz = 100
ngf = 512
ndf = 512
num_epochs = 512
lr = 0.0002
beta1 = 0.5
ngpu = 1

# a11n is the [262144, 1] training tensor; workers is defined earlier
dataloader = torch.utils.data.DataLoader(a11n, batch_size=batch_size,
                                         shuffle=True, num_workers=workers)
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
real_batch = next(iter(dataloader))
```
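A quick check of what the loader actually yields (a sketch, assuming a11n is the [262144, 1] tensor described above):

```python
print(real_batch.shape)   # torch.Size([512, 1]) -> still 2-D: [batch_size, 1], no seq_len dimension yet
```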

```python
# Custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)
```

```python
# Generator Code
class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nz x 1 x 1)
            nn.ConvTranspose1d(nz, ngf * 16, 4, 1, 0, bias=False),
            nn.BatchNorm1d(ngf * 16),
            nn.ReLU(True),
            # state size. (ngf*16) x 4 x 1
            nn.ConvTranspose1d(ngf * 16, ngf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm1d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 8 x 1
            nn.ConvTranspose1d(ngf * 8, ngf * 4, 4, 4, 0, bias=False),
            nn.BatchNorm1d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 32 x 1
            nn.ConvTranspose1d(ngf * 4, ngf * 2, 4, 4, 0, bias=False),
            nn.BatchNorm1d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 128 x 1
            nn.ConvTranspose1d(ngf * 2, ngf, 4, 4, 0, bias=False),
            nn.Tanh()
            # state size. (512) x 1 x 1
        )

    def forward(self, input):
        return self.main(input)
```
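A quick way to sanity-check the state-size comments is to push a dummy latent batch through the generator (a sketch using the nz/ngf values defined above, not part of the original post):

```python
netG_test = Generator(ngpu=1)
z = torch.randn(2, nz, 1)     # [batch, nz, seq_len=1], the layout the first layer expects
with torch.no_grad():
    out = netG_test(z)
print(out.shape)              # torch.Size([2, 512, 512]) if the transposed-conv arithmetic
                              # above is right: ngf channels, length 512
```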

```python
# Create the generator
netG = Generator(ngpu).to(device)

# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)
```

```python
# Discriminator Code
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (1) x 1 x 512
            nn.Conv1d(1, ndf * 2, 4, 4, 0, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 1 x 128
            nn.Conv1d(ndf * 2, ndf * 4, 4, 4, 0, bias=False),
            nn.BatchNorm1d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 1 x 32
            nn.Conv1d(ndf * 4, ndf * 8, 4, 4, 0, bias=False),
            nn.BatchNorm1d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 1 x 8
            nn.Conv1d(ndf * 8, ndf * 16, 4, 2, 1, bias=False),
            nn.BatchNorm1d(ndf * 16),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*16) x 1 x 4
            nn.Conv1d(ndf * 16, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
            # state size. 1 x 1 x 1
        )

    def forward(self, input):
        return self.main(input)
```
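The same kind of sanity check for the discriminator (again a sketch, not part of the original post): with a [batch, 1, 512] input, the stack above should reduce to one sigmoid output per sample:

```python
netD_test = Discriminator(ngpu=1)
x = torch.randn(2, 1, 512)     # [batch, channels=1, seq_len=512]
with torch.no_grad():
    print(netD_test(x).shape)  # expected: torch.Size([2, 1, 1])
```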

If I don't unsqueeze it but use the input as is, [262144, 1], then I get this error:

```
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [1024, 1, 4], but got 1-dimensional input of size [1] instead
```

I actually have a [262144, 1] tensor, so how does it end up seeing a 1-dimensional input instead? I don't get it.

Thank you for your help again!

Additionally, I found the line in the training loop that causes the error:

```python
real_cpu = data[0].to(device)
```

Here data[0] is only a 1-dimensional tensor, yet the error says a 3-dimensional input is expected. However, when I unsqueeze it here:

```python
def forward(self, input):
    input = torch.unsqueeze(input, 2)
    return self.main(input)
```

This time, I get the error:

```
Dimension out of range (expected to be in the range of [-2, 1], but got 2)
```

It is kind of confusing: it expects 3 dimensions, but it does not let me make the input 3-dimensional, and I don't know why.
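For reference, unsqueeze can only insert a dimension at an index up to the tensor's current number of dimensions, which is what the [-2, 1] range in the message refers to: data[0] is 1-D here, so only dims 0 and 1 (or -1 and -2) are valid. A minimal reproduction:

```python
import torch

t = torch.zeros(1)            # 1-D tensor, like data[0] here
print(t.unsqueeze(0).shape)   # torch.Size([1, 1])
print(t.unsqueeze(1).shape)   # torch.Size([1, 1])
t.unsqueeze(2)                # raises: Dimension out of range
                              # (expected to be in the range of [-2, 1], but got 2)
```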
Here is the full code below, including the discriminator and some steps of the training loop (sorry for the three responses):

```python
# Discriminator Code
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (1) x 1 x 512
            nn.Conv1d(1, ndf * 2, 4, 4, 0, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 1 x 128
            nn.Conv1d(ndf * 2, ndf * 4, 4, 4, 0, bias=False),
            nn.BatchNorm1d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 1 x 32
            nn.Conv1d(ndf * 4, ndf * 8, 4, 4, 0, bias=False),
            nn.BatchNorm1d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 1 x 8
            nn.Conv1d(ndf * 8, ndf * 16, 4, 2, 1, bias=False),
            nn.BatchNorm1d(ndf * 16),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*16) x 1 x 4
            nn.Conv1d(ndf * 16, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
            # state size. 1 x 1 x 1
        )

    def forward(self, input):
        input = torch.unsqueeze(input, 2)
        return self.main(input)
```

```python
# Create the Discriminator
netD = Discriminator(ngpu).to(device)

# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netD = nn.DataParallel(netD, list(range(ngpu)))

# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)

# Initialize BCELoss function
criterion = nn.BCELoss()

# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(512, nz, 1, device=device)

# Establish convention for real and fake labels during training
real_label = 1
fake_label = 0

# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))

# Training Loop

# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):
        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ############################
        ## Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # ... (rest of the training loop omitted)
```

OK, I fixed the error. I had to unsqueeze the tensor eventually, but that was not the only problem. I had actually tried the unsqueeze(input, 0) solution before, and it still did not work, because one thing in the training part was set up wrong, namely this line:

```python
real_cpu = data[0].to(device=device, dtype=torch.float)
```

It indexes the batch with [0]. After I removed the [0] and unsqueezed the batch, the model worked!
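For reference, the shape difference between the two versions (a sketch; the exact dimension that gets unsqueezed in the working version isn't shown above, so the unsqueeze(1) below is just an illustration):

```python
data = next(iter(dataloader))

old = data[0].to(device=device, dtype=torch.float)             # torch.Size([1])         - one sample, 1-D
new = data.to(device=device, dtype=torch.float).unsqueeze(1)   # torch.Size([512, 1, 1]) - full batch, 3-D
print(old.shape, new.shape)
```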
