Hello everyone,

I want to freeze some weights in a convolutional network by setting them to zero, so that those weights have no effect on training while the gradient flows backward and the weights are updated. In other words, the weights must stay at zero and be ignored during training.

For this purpose, I defined two functions that convert the model to a vector and vice versa. The vector contains the weights of the model. In one section of my code I convert the model to a vector, then I manipulate the vector (set some weights to zero), and finally the vector is converted back into the model and the gradient step is applied to train it. This is the code:

vector = model2vector(model)          # flatten the model weights into a vector

batch_data, batch_target = Variable(x.cuda()), Variable(target.cuda())
optimizer.zero_grad()
output = model(batch_data)
loss = criterion(output, batch_target)
loss.backward()

if epoch > 0:
    vector[1000:2000] = 0             # set the weights I want to freeze to zero
    model = vector2model(model, vector)

optimizer.step()
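The two conversion helpers are not shown above; a minimal sketch of what I mean by them, assuming they simply flatten and restore all parameters, could be written with PyTorch's built-in utilities:

```python
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def model2vector(model):
    # Flatten every parameter of the model into one 1-D tensor.
    return parameters_to_vector(model.parameters())

def vector2model(model, vector):
    # Copy the (possibly modified) flat vector back into the
    # model's parameters, in place, and return the model.
    vector_to_parameters(vector, model.parameters())
    return model
```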

After running these lines of code, the number of zero weights decreases again. How can I solve this problem? I want to keep the zeroed weights frozen and stop the gradient from affecting them.
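To make the goal concrete: the zeroed weights come back because optimizer.step() still applies their (nonzero) gradients. A minimal sketch of the behaviour I am after, using a hypothetical mask and plain SGD (no momentum or weight decay) on a toy layer rather than my actual network, would zero both the weights and their gradients:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

# Hypothetical mask: 1 = trainable, 0 = frozen at zero.
mask = torch.ones_like(model.weight)
mask[:, :3] = 0

# Zero the frozen weights once.
with torch.no_grad():
    model.weight.mul_(mask)

x, target = torch.randn(4, 10), torch.randn(4, 5)
optimizer.zero_grad()
loss = criterion(model(x), target)
loss.backward()

# Zero the matching gradients so step() cannot revive the frozen weights.
model.weight.grad.mul_(mask)
optimizer.step()
```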