Why are the forward times different?

I set up a Net and tested the forward time with random input:

import time
import torch

test_data = torch.randn(1, 500, 3).cuda()
net = Net().cuda()
net.load_state_dict(torch.load('Net.pth'))
net.eval()
t1 = time.process_time()
output = net(test_data)
t2 = time.process_time()
print('forward time: ', t2 - t1)

and got this forward time:

forward time: 0.0874945260000004

But when I use this model on real-world data in my task, I get a different time:

forward time: 0.3340185280000014

The real-world data has the same shape as my test data, so why are the forward times different? And how can I reduce the forward time on the real-world task?

Hi,

If you’re running on GPU, you need to add a torch.cuda.synchronize() before every call to time.process_time(), because the CUDA API is asynchronous: the Python call returns before the GPU has actually finished the work, so timestamps taken without synchronizing don’t measure the forward pass.
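A minimal sketch of that pattern, using a toy `torch.nn.Linear` as a stand-in for the original `Net` (which isn’t shown in the thread). The code falls back to CPU timing when no GPU is available, so the same script runs anywhere:

```python
import time
import torch

net = torch.nn.Linear(3, 8)          # hypothetical stand-in for Net
data = torch.randn(1, 500, 3)

use_cuda = torch.cuda.is_available()
if use_cuda:
    net, data = net.cuda(), data.cuda()

net.eval()
with torch.no_grad():
    if use_cuda:
        torch.cuda.synchronize()     # drain any previously queued GPU work
    t1 = time.process_time()
    out = net(data)
    if use_cuda:
        torch.cuda.synchronize()     # wait for this forward pass to finish
    t2 = time.process_time()

elapsed = t2 - t1
print('forward time:', elapsed)
```

Note the extra synchronize after the forward call as well: without it, `t2` can be taken while the kernel is still running.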

Hi, I tried adding this, but the times don’t change much: the first is still about 0.08 and the second is still 0.3+.

Are you running both experiments on the same computer? With the exact same code?
Does your real-world data have some unusual properties (e.g. values that are almost all zero)?

Yes, all experiments are on the same computer with the same code, and the real-world data looks ordinary.

Could you send a small sample so that I can try and reproduce it here locally?

Sorry, this code belongs to our lab and is not allowed to be distributed without permission.
In general, what causes a difference in forward time between a test and a real-world task?

Well, I can’t think of anything specific really :slight_smile:
It may be some discrepancy in your timing scripts?
It may be external factors (other processes competing for the GPU)?
It may be that the inputs don’t actually have the same size?
It may be that your net does different things depending on the input values (like an RNN whose number of steps is controlled by the network itself)?
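Since the first suspect above is the timing script itself, here is a framework-free sketch of a more robust harness: do an untimed warm-up call first, then report the median over several runs. The `work` function is a hypothetical stand-in for a forward pass, and `functools.lru_cache` mimics the effect of one-time setup cost (kernel selection, caching) that can make a single cold measurement misleading:

```python
import statistics
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def work(n):
    # Stand-in for a forward pass: the first call does real work,
    # later calls hit the cache, mimicking post-warm-up behavior.
    return sum(i * i for i in range(n))

def time_once(fn, *args):
    t1 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t1

cold = time_once(work, 2_000_000)    # cold call: pays the full cost

_ = work(2_000_000)                  # warm-up run, deliberately not timed
runs = [time_once(work, 2_000_000) for _ in range(5)]
median_warm = statistics.median(runs)

print('cold:', cold, 'median warm:', median_warm)
```

Timing only a single call, as in the original script, measures whatever state the system happens to be in; warming up and taking a median over repeated runs makes the two experiments comparable.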

Alright, I’m going to examine the code and the data carefully. Thanks!
