Estimating size of model without loading in GPU


I’m writing a scaffold that will allow launching PyTorch jobs across machines with a simple GUI. For this, I want to know how much memory training a model will need before the training run starts.

An approximation should be: size of the model + size of a loaded batch + some extra space for temporary I/O and intermediate variables.

Given a PyTorch model, what would be a way to calculate the size it will occupy in memory?
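For the model itself, one rough sketch is to sum the byte sizes of all parameters and buffers; `numel()` and `element_size()` are standard tensor methods, and `parameters()` / `buffers()` are standard `nn.Module` methods. The helper names below (`model_bytes`, `training_bytes_estimate`) and the optimizer-state multiplier are my own assumptions, and this deliberately ignores activation memory, which often dominates during training:

```python
def model_bytes(model):
    """Bytes occupied by a model's parameters and buffers.

    Works for any object with torch-like parameters()/buffers()
    that yield tensors exposing numel() and element_size().
    """
    tensors = list(model.parameters()) + list(model.buffers())
    return sum(t.numel() * t.element_size() for t in tensors)

def training_bytes_estimate(model, optimizer_copies=2):
    """Crude training-time estimate (hypothetical heuristic).

    Gradients add one extra copy of the parameters; an optimizer
    like Adam keeps two more per-parameter states (the running
    first and second moments), hence optimizer_copies=2.
    Activations and framework overhead are NOT included.
    """
    params = sum(p.numel() * p.element_size() for p in model.parameters())
    buffers = sum(b.numel() * b.element_size() for b in model.buffers())
    # weights + gradients + optimizer state, plus buffers once
    return params * (2 + optimizer_copies) + buffers
```

Since these helpers only call `numel()`/`element_size()`, you can also run them on a model instantiated on PyTorch's `meta` device, which builds the module structure without allocating real GPU or CPU storage.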