Is the training speed on GPU and CPU the same?

When using PyTorch for a conditional GAN, I found that the training speed barely changes whether I train on the GPU or on the CPU.

Are there mistakes in my code?

There might not be any mistakes in your code.
If you run a very small model, it will run at roughly the same speed on CPU and GPU.

If you run a large model, you might see a speedup.
You can look at our examples repository for fast GAN code: https://github.com/pytorch/examples/
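
If you want to see the difference yourself, here is a rough timing sketch; the model, layer sizes, and batch size are made up for illustration, and the GPU timing is synchronized so it is measured honestly:

import time

import torch
import torch.nn as nn

def bench(device, steps=50):
    # An arbitrary toy model; widen the layers and the GPU gap grows.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
    criterion = nn.MSELoss()
    x = torch.randn(256, 1024, device=device)
    y = torch.randn(256, 10, device=device)
    criterion(model(x), y).backward()  # warm-up (CUDA init, kernel caching)
    if device == 'cuda':
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(steps):
        model.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
    if device == 'cuda':
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    return (time.time() - start) / steps

print('CPU: %.4f s/step' % bench('cpu'))
if torch.cuda.is_available():
    print('GPU: %.4f s/step' % bench('cuda'))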

Are you putting the net/model on CUDA? Unlike TensorFlow, here you have to explicitly specify that you want to use the GPU.

For small models there isn’t much difference, but for big models the difference is quite large.
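
A quick way to check that things actually ended up on the GPU (the discriminator here is a stand-in for your real network):

import torch
import torch.nn as nn

discriminator = nn.Linear(10, 1)  # stand-in for your real D net
print(torch.cuda.is_available())                 # can PyTorch see a GPU at all?
print(next(discriminator.parameters()).is_cuda)  # False: weights still on the CPU
if torch.cuda.is_available():
    discriminator.cuda()
    print(next(discriminator.parameters()).is_cuda)  # True: weights now on the GPU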

@Ismail_Elezi, thanks for your suggestion. I use the DCGAN framework (5 conv layers for the D net and 5 deconv layers for the G net). For every variable I use a = a.cuda(). Is this setting right?

Yes, you should send all the variables to CUDA, but in addition you also need to send both neural networks to CUDA, as well as the criterion (which contains the cost function).

Something like:

if opt.cuda:
    discriminator.cuda()  # moves the network's parameters to the GPU in place
    generator.cuda()
    criterion.cuda()      # the loss module is moved too
    input, label = input.cuda(), label.cuda()  # tensor .cuda() returns a copy, so reassign

might do the trick.
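
For reference, this is a sketch of how newer PyTorch versions express the same thing device-agnostically (the names mirror the snippet above):

import torch

device = torch.device('cuda' if opt.cuda else 'cpu')
discriminator.to(device)
generator.to(device)
criterion.to(device)
input, label = input.to(device), label.to(device)

The upside of this pattern is that the same script runs unchanged on a CPU-only machine.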
