Hi,
While calculating the FLOPs of a GAN model, I noticed something strange.
I compared the following two layers:
1. Conv2d(kernel_size=3, stride=1, padding=1)
2. ConvTranspose2d(kernel_size=4, stride=2, padding=1)
Both layers have 1024 input channels and 512 output channels.
My computed results are:
Conv2d
- # of parameters: 4.719 M
- GMac: 3.7
ConvTranspose2d
- # of parameters: 8.389 M
- GMac: 1.645
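For reference, here is a small sketch that reproduces these numbers from the standard parameter/MAC formulas. The feature-map sizes (28x28 for the Conv2d, 14x14 input upsampled to 28x28 for the ConvTranspose2d) are my assumption, chosen because they are the sizes that match the reported GMac figures:

```python
def conv_params(c_in, c_out, k, bias=False):
    """Weight count of a (transposed) convolution, plus optional bias."""
    return c_in * c_out * k * k + (c_out if bias else 0)

def conv_macs(c_in, c_out, k, positions):
    """MACs = kernel work per position x number of positions where the
    kernel is applied (output positions for a conv, input positions for
    a transposed conv)."""
    return c_in * c_out * k * k * positions

# 1. Conv2d(1024 -> 512, kernel_size=3, stride=1, padding=1)
#    stride 1 keeps the spatial size, so assume a 28x28 map throughout.
p1 = conv_params(1024, 512, 3)           # 4,718,592  ~ 4.719 M
m1 = conv_macs(1024, 512, 3, 28 * 28)    # ~ 3.70 GMac

# 2. ConvTranspose2d(1024 -> 512, kernel_size=4, stride=2, padding=1)
#    the kernel is applied once per *input* position; assume a 14x14
#    input (upsampled to 28x28 output).
p2 = conv_params(1024, 512, 4)           # 8,388,608  ~ 8.389 M
m2 = conv_macs(1024, 512, 4, 14 * 14)    # ~ 1.644 GMac

print(p1, m1 / 1e9)
print(p2, m2 / 1e9)
```

If these assumed sizes are right, the gap comes from the spatial grid: the Conv2d does its kernel work at 784 positions, while the ConvTranspose2d does it at only 196, which more than offsets its larger kernel.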
First, are these results valid?
If so, why does Conv2d, despite having fewer parameters, have a higher FLOPs count than ConvTranspose2d?