Uniformly scale down ResNet model size

Hi,

I am attempting to uniformly scale down the parameter count of my ResNet50 by shrinking the width of the network: multiplying the in_channels and out_channels of every nn.Conv2d layer (keeping the 3-channel input of the first conv fixed) and the in_features of the final nn.Linear layer by some constant factor, while leaving the number of output classes unchanged. (Presumably the num_features of the matching nn.BatchNorm2d layers would have to shrink by the same factor.) The model is obtained via torchvision:

self.model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True)

I’m hoping there’s a way to do this without redefining every layer by hand inside my network class. Any tips?
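
For concreteness, below is the kind of generic module-replacement pass I've sketched so far (scale_width is just a name I made up; it assumes the stock ResNet50 layout with groups=1, dilation=1 convs, and since shrinking the channels discards the pretrained weights anyway, the scaled model starts from a random init):

import math
import torch
import torch.nn as nn

def scale_width(model, factor, keep_in=3, keep_out=1000):
    """Shrink every Conv2d/BatchNorm2d/Linear width by `factor`,
    keeping the network input (keep_in channels) and the final
    output (keep_out classes) untouched. Weights are re-initialised."""
    def scaled(c):
        return max(1, math.ceil(c * factor))

    # Materialise the list first so modules can be swapped while walking it.
    for name, module in list(model.named_modules()):
        if "." in name:
            parent_name, child_name = name.rsplit(".", 1)
            parent = model.get_submodule(parent_name)
        elif name:
            parent, child_name = model, name
        else:
            continue  # skip the root module itself

        if isinstance(module, nn.Conv2d):
            in_c = module.in_channels
            new = nn.Conv2d(
                in_c if in_c == keep_in else scaled(in_c),
                scaled(module.out_channels),
                kernel_size=module.kernel_size,
                stride=module.stride,
                padding=module.padding,
                bias=module.bias is not None,
            )
            setattr(parent, child_name, new)
        elif isinstance(module, nn.BatchNorm2d):
            setattr(parent, child_name, nn.BatchNorm2d(scaled(module.num_features)))
        elif isinstance(module, nn.Linear):
            out_f = module.out_features
            new = nn.Linear(
                scaled(module.in_features),
                out_f if out_f == keep_out else scaled(out_f),
            )
            setattr(parent, child_name, new)
    return model

# pretrained=False since the pretrained weights can't survive the shrink anyway
model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=False)
model = scale_width(model, factor=0.5)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])

This stays consistent because every new channel count is derived from the original one via the same rounding, so residual additions and downsample branches still line up. Is there a cleaner or more built-in way than walking named_modules() like this?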