Hey everyone,
I’m working on improving a model based on a VGG19 baseline trained on the CIFAR-10 dataset, and I noticed that my modified version has significantly higher inference time and CPU usage than the baseline. I expected some overhead from the changes, but the difference is much larger than anticipated.
I’ve been troubleshooting for a while but haven’t been able to pinpoint the exact issue.
If anyone with experience in optimizing inference time and CPU efficiency could take a look, I’d really appreciate it!
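In case it helps frame the question, here is a minimal sketch of the kind of CPU inference timing I’m doing (this assumes PyTorch/torchvision and a stock VGG19 with the classifier head swapped for 10 classes — the actual modified architecture and full profiling output are in the notebook):

```python
import time
import torch
import torchvision.models as models

# Illustrative sketch only: a torchvision VGG19 adapted for CIFAR-10 (10 classes).
# The real modified model and profiling code are in the linked notebook.
device = torch.device("cpu")
model = models.vgg19(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 10)  # replace 1000-class head with 10 classes
model.eval().to(device)

# Dummy batch of 32 images at 224x224 (CIFAR-10 images upscaled to VGG input size)
batch = torch.randn(32, 3, 224, 224, device=device)

with torch.no_grad():
    # Warm-up pass so one-time setup cost doesn't skew the measurement
    model(batch)
    start = time.perf_counter()
    for _ in range(10):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"Average inference time per batch: {elapsed / 10:.3f} s")
```

The comparison in the notebook uses the same batch size and input resolution for both the baseline and the modified model, so the timing difference shouldn’t come from the measurement setup itself.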
Here’s the link to my notebook with the code and profiling results: