Dropout2d is too slow

I am running the example at https://github.com/pytorch/examples/blob/master/cpp/mnist/mnist.cpp and training is very slow.
After some debugging I found that the Dropout2d layer (conv2_drop->forward(x)) takes around 20 seconds with CUDA on an RTX 2060. I don't know why it is taking so long; I'd appreciate any help!
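For context, this is roughly the part of the example I mean (paraphrased from the linked mnist.cpp, so minor details may differ):

```cpp
#include <torch/torch.h>

// conv2_drop is a Dropout2d module applied to the output of the second conv.
struct Net : torch::nn::Module {
  Net()
      : conv1(torch::nn::Conv2dOptions(1, 10, /*kernel_size=*/5)),
        conv2(torch::nn::Conv2dOptions(10, 20, /*kernel_size=*/5)),
        fc1(320, 50),
        fc2(50, 10) {
    register_module("conv1", conv1);
    register_module("conv2", conv2);
    register_module("conv2_drop", conv2_drop);
    register_module("fc1", fc1);
    register_module("fc2", fc2);
  }

  torch::Tensor forward(torch::Tensor x) {
    x = torch::relu(torch::max_pool2d(conv1->forward(x), 2));
    // This is the call that appears to take ~20 seconds:
    x = torch::relu(torch::max_pool2d(conv2_drop->forward(conv2->forward(x)), 2));
    x = x.view({-1, 320});
    x = torch::relu(fc1->forward(x));
    x = torch::dropout(x, /*p=*/0.5, /*train=*/is_training());
    x = fc2->forward(x);
    return torch::log_softmax(x, /*dim=*/1);
  }

  torch::nn::Conv2d conv1;
  torch::nn::Conv2d conv2;
  torch::nn::Dropout2d conv2_drop;
  torch::nn::Linear fc1, fc2;
};
```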
Here's my environment (collect_env output):

PyTorch version: 1.8.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.18.2

Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 11.2.67
GPU models and configuration: GPU 0: GeForce RTX 2060
Nvidia driver version: 460.27.04
cuDNN version: Probably one of the following:
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] numpydoc==1.1.0
[pip3] torch==1.8.1
[conda] blas                      1.0                         mkl  
[conda] magma-cuda110             2.5.2                         1    pytorch
[conda] magma-cuda112             2.5.2                         1    pytorch
[conda] mkl                       2020.2                      256  
[conda] mkl-include               2020.2                      256  
[conda] mkl-service               2.3.0            py38he904b0f_0  
[conda] mkl_fft                   1.2.0            py38h23d657b_0  
[conda] mkl_random                1.1.1            py38h0573a6f_0  
[conda] numpy                     1.19.2           py38h54aff64_0  
[conda] numpy-base                1.19.2           py38hfa32c7d_0  
[conda] numpydoc                  1.1.0              pyhd3eb1b0_1  
[conda] torch                     1.8.1                    pypi_0    pypi

How did you profile the code?
Note that CUDA operations execute asynchronously, so timing a single op with a host-side timer and no synchronization either measures only the kernel launch or accidentally attributes previously queued work to that op. Have you synchronized before and after the call you are timing (or used a profiling tool such as nsys) to confirm the bottleneck really is the dropout layer?
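For example, something along these lines would isolate the dropout call with explicit synchronization (a minimal standalone sketch, not taken from your code; it assumes torch::cuda::synchronize() is available in your libtorch build — calling cudaDeviceSynchronize() directly would also work):

```cpp
#include <torch/torch.h>
#include <chrono>
#include <iostream>

int main() {
  torch::Device device(torch::kCUDA);

  // Stand-in for the example's conv2_drop module.
  torch::nn::Dropout2d conv2_drop;
  conv2_drop->to(device);
  conv2_drop->train();  // dropout is only active in training mode

  // Roughly the activation shape the example feeds into conv2_drop
  // (batch of 64, 20 channels, 8x8 spatial size).
  auto x = torch::randn({64, 20, 8, 8}, device);

  // Warm-up, then wait for all previously queued kernels to finish
  // so they are not attributed to the timed call below.
  conv2_drop->forward(x);
  torch::cuda::synchronize();

  auto start = std::chrono::steady_clock::now();
  auto y = conv2_drop->forward(x);
  torch::cuda::synchronize();  // make sure the dropout kernel actually finished
  auto end = std::chrono::steady_clock::now();

  std::cout << "Dropout2d forward took "
            << std::chrono::duration<double, std::milli>(end - start).count()
            << " ms\n";
}
```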