I’m trying to build a VGG16 model in PyTorch in order to export it to ONNX. I want to load the model with my own set of weights and biases, but during this process my computer quickly runs out of memory.
Here is how I want to do it (this is only a test; in the real version I read the weights and biases from a set of files). This example just forces all values to 0.5:
```python
# Create an empty VGG16 model (random weights)
from torchvision import models
from torchsummary import summary

vgg16 = models.vgg16()  # structure is: vgg16.__dict__
summary(vgg16, (3, 224, 224))

# Convolutional layers
for layer in vgg16.features:
    print()
    print(layer)
    if hasattr(layer, 'weight'):
        out_ch, in_ch, kh, kw = layer.weight.shape
        print(layer.weight.shape)
        print(str(out_ch * (in_ch * kh * kw + 1)) + ' params')
        # Replace weights and biases
        for i in range(out_ch):
            layer.bias[i] = 0.5
            for j in range(in_ch):
                for k in range(kh):
                    for l in range(kw):
                        layer.weight[i][j][k][l] = 0.5

# Dense layers
for layer in vgg16.classifier:
    print()
    print(layer)
    if hasattr(layer, 'weight'):
        out_f, in_f = layer.weight.shape
        print(str(layer.weight.shape) + ' --> ' + str(out_f * (in_f + 1)) + ' params')
        for i in range(out_f):
            layer.bias[i] = 0.5
            for j in range(in_f):
                layer.weight[i][j] = 0.5
```
When I look at the computer’s memory usage, it grows linearly and saturates the 16 GB of RAM while the first dense layer is being processed. Then Python crashes…
Is there a better way to do this, keeping in mind that I want to export the model to ONNX afterwards?
Thanks for your help.