Size of training set

In the context of multilabel image classification, I am aware that the complexity of the model must match the size of the training set to avoid overfitting. However, I don’t know what the criteria are.
Can someone give me an approximate number of samples needed for the standard models, say ResNet-xx, DenseNet-xxx, Xception, or others? It will surely also depend on the number of layers being trained in the case of transfer learning.
I could use all the help I can get.
Thanks in advance

There is no hard minimum for the size of the training set. It does depend on the number of layers being trained, but as long as you have enough samples (say in the range of 10k-100k; I have trained with even 500-1k samples) with diverse augmentations, it should be okay.

Also, it depends on your use case. For some applications you may not find enough samples, in which case you have to rely more heavily on augmentation.