Hi, I’m trying to get started with PyTorch,
but I noticed that GPU (cuda:0) performance was not as fast as I expected.
So, as a test, I made a very simple script to check it.
It seems that no matter how small the network is, there is a minimum computation time per optimization step…
Am I right? And if not, could you tell me how I can reduce this computation time?
The script used for the test is here: https://github.com/sizsJEon/pytorchTest/blob/master/torchAdamTest.py
I’m using PyTorch 1.4 and an RTX 2060 with CUDA.
I tested the network with
(H1, H2) = (4, 2), (24, 12), (600, 300), (2000, 1000)
From (4, 2) up to (600, 300), the computation time was almost the same:
modelTime: 0.17~0.19 ms
lossTime : 0.45~0.47 ms
optiTime : 0.52~0.55 ms
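For reference, here is a minimal sketch of how I measure one training step with proper CUDA synchronization (without torch.cuda.synchronize(), the timings only capture kernel launch, not execution). The layer sizes (10 inputs, 1 output) and batch size are placeholders, not the exact values from my script:

```python
import time
import torch
import torch.nn as nn

def time_step(h1, h2, iters=100):
    """Average wall-clock time (ms) of one Adam training step
    for a small two-hidden-layer MLP. Falls back to CPU if no GPU."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(
        nn.Linear(10, h1), nn.ReLU(),   # input width 10 is a placeholder
        nn.Linear(h1, h2), nn.ReLU(),
        nn.Linear(h2, 1),
    ).to(device)
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.MSELoss()
    x = torch.randn(64, 10, device=device)  # batch of 64 is a placeholder
    y = torch.randn(64, 1, device=device)

    # Warm-up so one-time costs (cuDNN init, allocator) don't skew timing.
    for _ in range(10):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued kernels before timing
    t0 = time.perf_counter()
    for _ in range(iters):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the timed kernels to finish
    return (time.perf_counter() - t0) / iters * 1000.0  # ms per step

if __name__ == "__main__":
    for h1, h2 in [(4, 2), (24, 12), (600, 300), (2000, 1000)]:
        print(f"(H1, H2) = ({h1}, {h2}): {time_step(h1, h2):.3f} ms/step")
```

My guess is that for the small sizes the time is dominated by fixed per-step overhead (Python, kernel launches), which is why (4, 2) and (600, 300) look the same.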