WhiteNoise Layer for DCGAN tutorial

Hi everyone,

I'm trying to implement one of the stability tricks for GANs in PyTorch, based on the DCGAN example.
I've used (Lua) Torch before and found a WhiteNoise layer that gave me good results, and now I'd like to port
it to PyTorch.

I am no expert in PyTorch, so I'm having trouble defining the forward method and making it compatible
with the multi-GPU DCGAN example.

Any hint would be welcome, and I'm happy to make a pull request to add this as a feature once it's done.

Link to Gist containing my attempt

Edit:
I've followed the “Extending PyTorch” guide in the docs and updated my code accordingly.
There still seems to be an issue in the forward pass, because of this error:

“TypeError: forward() takes exactly 2 arguments (1 given)”

There are a number of problems with your example. One very important one is that you're nesting data parallel invocations (you have one in the forward function, but the noise module is also wrapped in a DataParallel module).

Apart from that, I really encourage you to revisit the tutorials; new module implementations should look different from those in Lua Torch. There's no need for a noise layer: in the forward of your generator you can just sample the noise (Variable(torch.randn(size))) and forward it through the network. You might also want to read through the DCGAN example or the code for the Wasserstein GAN paper.
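For example, sampling the noise directly (a rough sketch; nz and netG follow the DCGAN example's naming, and batch_size stands in for whatever batch size you use):

# sample the latent vector and forward it through the generator,
# without any dedicated noise layer
noise = Variable(torch.randn(batch_size, nz, 1, 1))
fake = netG(noise)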

Thank you @apaszke!

I believe you may have misunderstood the purpose of this module.
I am aware that for the generator you sample your latent vector z (from a normal distribution) and forward it through the network.

What I would like to achieve here is additive noise on the inputs of the Discriminator;
that's why I chose to create another module or layer in the Discriminator.

I will have another look at the tutorials, thank you in the meantime.

I'm replying to myself, in case anyone else runs into this issue.

The simpler way was to create a new noise Variable and add it to the
real and fake images before calling the discriminator:

# copy the real batch into the input Variable
input.data.resize_(real_cpu.size()).copy_(real_cpu)
# sample Gaussian noise of the same shape and add it in place
additive_noise.data.resize_(real_cpu.size()).normal_(0, std)
input.data.add_(additive_noise.data)
output = netD(input)

You could also just sample white noise, wrap it in a Variable and add that. I think that would be a more elegant solution.
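For instance, reusing the names from the snippet above (just a sketch, not tested against that exact script):

additive_noise.data.resize_(real_cpu.size()).normal_(0, std)
output = netD(input + additive_noise)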

I think having a module for white noise would still be useful when we have an nn.Sequential with multiple child modules and we want to add white noise in the middle of the nn.Sequential.

Of course, we can add the white noise in the forward function, but in that case we would have to split the nn.Sequential into two modules so that we can put the white noise in the middle, right? I am not sure whether there is a more elegant solution.
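For reference, a hypothetical GaussianNoise module along those lines could look roughly like this (the name and default std are made up, and it samples fresh noise only in training mode; this assumes the old Variable API used at the time of this thread):

import torch.nn as nn
from torch.autograd import Variable

class GaussianNoise(nn.Module):
    """Add zero-mean Gaussian noise to the input while in training mode."""
    def __init__(self, std=0.1):
        super(GaussianNoise, self).__init__()
        self.std = std

    def forward(self, x):
        if self.training and self.std > 0:
            noise = Variable(x.data.new(x.size()).normal_(0, self.std))
            return x + noise
        return x

It can then be dropped into the middle of an nn.Sequential like any other layer, e.g. nn.Sequential(conv1, nn.LeakyReLU(0.2), GaussianNoise(0.05), conv2).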

@supakjk Nothing stops you from doing that, but it's not the recommended way. I find using layers elegant only for more complex functions that have lots of parameters, like Conv, where you have weight, bias, stride, padding, etc. For something as simple as adding noise, I'd rather add it in the forward function.

If you want to add the noise in the middle of a network, just use two Sequentials.
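For example, a rough sketch of that split (the specific layers are placeholders, only the structure matters; again written against the old Variable API):

import torch.nn as nn
from torch.autograd import Variable

class NoisyDiscriminator(nn.Module):
    def __init__(self, std=0.1):
        super(NoisyDiscriminator, self).__init__()
        self.std = std
        self.front = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True))
        self.back = nn.Sequential(nn.Conv2d(64, 1, 4), nn.Sigmoid())

    def forward(self, x):
        h = self.front(x)
        if self.training:
            # add the white noise between the two Sequentials
            h = h + Variable(h.data.new(h.size()).normal_(0, self.std))
        return self.back(h)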
