How to initialize weight/bias data to specific vector without making it a leaf variable?

Hello everyone!

This is my first post here.

I have been trying to initialize bias data to a specific vector but have not found a way to do so without creating a leaf variable. I am coding in C++.

In the layer “affine2”, I can initialize the entries as follows:

affine2->bias[0] = 0.1;
affine2->bias[1] = 0.2;
affine2->bias[2] = 0.3;

However, later on, when trying to do backpropagation, this leads to the error “leaf variable has been moved into the graph interior”.

Does anyone know how to do exactly what I displayed without introducing leaf variables?

Thank you for your time.


You want to make changes that are not tracked by autograd here,
so you should wrap these statements in a torch::NoGradGuard.

1 Like

Thank you very much, albanD! It worked! :smile:

In case anyone comes across this post in the future, this is how albanD’s solution ended up looking for me:

   {
       // Guard suspends autograd tracking only inside this scope.
       torch::NoGradGuard no_grad;
       affine2->bias[0] = 0.5;
       affine2->bias[1] = 1.0;
       affine2->bias[2] = 0.5;
   }
1 Like