The problem: I still need this functionality. I have a tensor of size x, and I want just a part of it to hold fixed parameters that won't be changed during training.
I was reading that this kind of “messing around” may cause this error.
What can I do to get this functionality without the error?
Would it be possible to reset the particular part of your weight matrix after each iteration?
If so, you could wrap that code in a with torch.no_grad(): block and call it after optimizer.step() has been performed.
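A minimal sketch of that idea, assuming an nn.Linear layer and plain SGD; the names model, fixed_values, and the choice of freezing the first row of the weight matrix are made up for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: we want the first row of the weight matrix to stay fixed.
model = nn.Linear(4, 3)
fixed_values = torch.zeros(4)  # the values that should never change
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
target = torch.randn(8, 3)

for _ in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()
    # optimizer.step() also updated the "frozen" row, so we reset it here.
    # no_grad() keeps this assignment out of autograd, so no error is raised.
    with torch.no_grad():
        model.weight[0] = fixed_values

# After training, the first row still holds fixed_values,
# while the remaining rows were updated normally.
```

The gradient for the frozen row is still computed and applied by the optimizer; the reset simply overwrites it again before the next forward pass, which is why it has to run after every optimizer.step() call.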
What do you mean by resetting after each iteration?
I have a fully connected layer at the end of the network, and I want those weights to have certain values, so I need to insert these values somehow…
Sorry for not being clear enough.
I meant you could wrap your code sample in a with torch.no_grad(): block and call it after each iteration of your training loop. This would make sure the weights have the desired values in the next iteration.
Maybe I’m not getting it right, but no matter what, I will need to initialize the weights somehow, and as I understood from similar issues, that is exactly what causes the above problem, isn’t it?
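To illustrate why the no_grad() wrapper matters for the initialization step as well: a small sketch, assuming a recent PyTorch version in which an in-place assignment into a leaf tensor that requires grad raises a RuntimeError unless autograd tracking is disabled.

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 3)

# Assigning into the weight directly fails, because layer.weight is a
# leaf tensor with requires_grad=True and the assignment is in-place.
failed = False
try:
    layer.weight[0] = torch.zeros(4)
except RuntimeError:
    failed = True

# The same assignment inside no_grad() is allowed: autograd is not
# tracking the operation, so initializing (or resetting) weights works.
with torch.no_grad():
    layer.weight[0] = torch.zeros(4)
```

So the initialization itself is not the problem; doing it while autograd is tracking the operation is. Wrapping the assignment in no_grad() sidesteps the error without detaching the parameter from training.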