Hi friends in Pytorch community,
I want to measure the inference time of each (conv, bn, relu) layer in a PyTorch model (e.g. resnet101 from the torchvision package). Is there any way to do this? I guess register_buffer might be suitable for this task, but I don't know how to use it for timing. Does anyone have any idea? Thanks in advance.
Hi,
register_buffer is used to save buffers that you then use during the forward pass; it is not meant for timing.
I would recommend using the autograd profiler to measure the runtime.
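A minimal sketch of what that looks like, using torch.autograd.profiler. For brevity it profiles a small hand-built (conv, bn, relu) stack instead of resnet101; a torchvision model can be dropped in the same way. The shapes and module choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in model: a single (conv, bn, relu) block.
# torchvision.models.resnet101() can be profiled identically.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)
model.eval()
x = torch.randn(1, 3, 32, 32)

# Record per-op runtimes during one forward pass.
with torch.autograd.profiler.profile() as prof:
    with torch.no_grad():
        model(x)

# Aggregate timings by op name (conv, batch_norm, relu, ...)
# and print a table sorted by total CPU time.
table = prof.key_averages().table(sort_by="cpu_time_total")
print(table)
```

The table breaks the forward pass down per operator, so the conv, batch-norm, and relu rows give exactly the per-layer-type times asked about.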
Hi,
Thanks for your reply, I will check the docs for the autograd profiler.