Hi,
There are many real-time networks such as the MobileNet family, the ShuffleNet family, ENet, ERFNet, EDANet, and so on. These papers report a metric for prediction speed (maybe that is not the exact term) called FPS, frames per second. What I want to do is reproduce those numbers, so I need to know how to calculate FPS.
In my opinion, the first method is to feed a random tensor (with the same shape as the input image) to the network and compute the average inference time.
So here is the code snippet I tried:
```python
import time

import torch
from tqdm import tqdm


def speed_testing(self):
    # cuDNN configurations
    torch.backends.cudnn.benchmark = True
    torch.backends.cudnn.deterministic = True

    name = self.model.name
    print(" + {} Speed testing... ...".format(name))
    device = 'cuda:{}'.format(self.config.device_id)
    model = self.model.to(device)
    random_input = torch.randn(
        1, 3, self.config.input_size, self.config.input_size
    ).to(device)
    model.eval()

    time_list = []
    with torch.no_grad():  # disable autograd bookkeeping so only inference is timed
        for i in tqdm(range(10001)):
            torch.cuda.synchronize()  # wait for pending GPU work before starting the timer
            tic = time.time()
            model(random_input)
            torch.cuda.synchronize()  # wait for the forward pass to actually finish
            time_list.append(time.time() - tic)

    # the first iteration costs much more time than the others, so exclude it
    time_list = time_list[1:]
    print(" + Done 10000 iterations inference !")
    print(" + Total time cost: {}s".format(sum(time_list)))
    print(" + Average time cost: {}s".format(sum(time_list) / 10000))
    print(" + Frame Per Second: {:.2f}".format(1 / (sum(time_list) / 10000)))
```
I found that the cuDNN configuration affects the inference time, so I set both flags to True. Also, the first iteration costs much more time than the others, so I exclude it.
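An alternative I am considering, instead of throwing the first sample away, is to run a few untimed warm-up iterations before the timed loop, so that one-off costs are paid before the timer starts. A minimal sketch (the warm-up count of 50 is an arbitrary choice on my part):

```python
# untimed warm-up: run the model a few times before measuring, so one-off
# costs (CUDA context init, cudnn.benchmark autotuning) happen up front
with torch.no_grad():
    for _ in range(50):  # 50 is an arbitrary warm-up count
        model(random_input)
torch.cuda.synchronize()  # make sure all warm-up work is finished before timing
```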
But when I run the same network again and again, its inference speed gradually grows, from 112 FPS to 118 FPS to 126 FPS, though it sometimes falls back to around 100 FPS.
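To make the run-to-run drift concrete, the whole measurement can be repeated several times and summarized; a minimal sketch, where `measure_fps` just condenses the timing loop above into a function:

```python
import statistics
import time

import torch


def measure_fps(model, random_input, iters=1000):
    # same timing loop as above, condensed into a function
    times = []
    with torch.no_grad():
        for _ in range(iters + 1):
            torch.cuda.synchronize()
            tic = time.time()
            model(random_input)
            torch.cuda.synchronize()
            times.append(time.time() - tic)
    times = times[1:]  # drop the expensive first iteration
    return 1.0 / (sum(times) / iters)


# repeat the whole measurement several times to quantify the drift
fps_runs = [measure_fps(model, random_input) for _ in range(5)]
print("FPS per run: " + ", ".join("{:.1f}".format(f) for f in fps_runs))
print("mean = {:.1f}, stdev = {:.1f}".format(
    statistics.mean(fps_runs), statistics.stdev(fps_runs)))
```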
My questions are:
- Should I pass a real image to the network instead of a random tensor? (See the sketch after this list for what I mean.)
- Is this cuDNN configuration the right environment for testing a network's inference time?
- How can I get a stable average inference time for each network?
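For the first question, this is roughly what feeding a real image would look like, assuming standard torchvision preprocessing; the file name, input size, and device are placeholders:

```python
from PIL import Image
import torchvision.transforms as T

input_size = 512                     # placeholder for self.config.input_size
transform = T.Compose([
    T.Resize((input_size, input_size)),
    T.ToTensor(),                    # real pipelines usually add T.Normalize too
])
img = Image.open("example.jpg").convert("RGB")         # "example.jpg" is a placeholder path
real_input = transform(img).unsqueeze(0).to('cuda:0')  # same (1, 3, H, W) shape as the random tensor
```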
Thanks in advance; any ideas would be appreciated.