Computing Hardware Resources for a Deep Learning Model

Hi everyone
Can anyone guide me on how to estimate the resources a model will need before deploying it? For instance,
if we want to apply a neural network ‘A’ (with ‘N’ layers) to a dataset ‘D’, how can we work out what hardware we need to run it well, instead of deploying on hardware that is too small or too large and wasting time and money? A rough sketch of the kind of estimate I have in mind is below.
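
For context, this is the sort of back-of-the-envelope calculation I mean, assuming a PyTorch model (the layer sizes, batch size, and dtype below are just placeholder assumptions, not from any specific setup): count the parameters to get the weight memory, and push a dummy batch through the model to see the activation shapes. Training would need roughly 3-4x the weight memory on top of that for gradients and optimizer state. Is this the right direction, or is there a more systematic tool/method?

```python
import torch
import torch.nn as nn

# Placeholder model: an MLP standing in for the 'A' network with 'N' layers.
# Input dimension 1024 and hidden size 512 are arbitrary example values.
model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Memory to hold the weights alone (float32 = 4 bytes per parameter).
n_params = sum(p.numel() for p in model.parameters())
param_bytes = n_params * 4
print(f"parameters: {n_params:,} (~{param_bytes / 1e6:.1f} MB for weights)")

# Run one dummy batch to inspect activation shapes; activations also take
# memory, roughly proportional to batch size.
batch_size = 64  # assumed batch size
dummy_input = torch.randn(batch_size, 1024)
output = model(dummy_input)
print("output shape:", output.shape)
```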

Please help if anyone has any pointers. I have been googling a lot but couldn’t find a definitive answer.

Thanks in advance