I used to optimize a data tensor by marking it with (requires_grad=True) and adding it to the optimizer, which is defined in the training loop in the same (file.py).
Now I would like to add another data tensor, defined in a different (file.py), to this optimization process. How can I add it to the same optimizer?
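For reference, my current setup looks roughly like this (simplified, with placeholder names and shapes):

```python
import torch

# the data tensor I currently optimize (placeholder shape)
data_phrases = torch.randn(10, 300, requires_grad=True)

# inside the training loop the optimizer is created with this tensor
optimizer = torch.optim.SGD([data_phrases], lr=0.1)
```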
Thanks a lot for your information, I have tried to do that and included the tensor in the optimization process. In the attached part of the code, the data tensor that I wanted to include in the optimization process is (data_words), similarly to (data_phrases). (data_phrases) is optimized successfully and its data is updated, so the output tensor (New_phrases_optimized) is clearly different from the input. However, (data_words) is never optimized, so its input data is exactly the same as the output data. Is there any mistake here?
(Reminder: the other part of the code is about softmax regression for the phrases, so the purpose of having the words here is only to include them in the optimization process.) Should I add (data_words) to the optimizer in a different way?
Could you check the .grad attribute of data_words after the backward call?
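I.e. something along these lines (assuming loss is the tensor you call backward() on):

```python
loss.backward()
print(data_words.grad)    # None here would mean no gradient was computed for data_words
print(data_phrases.grad)  # for comparison
```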
Also, Variables are deprecated since PyTorch 0.4.0, so you can use tensors directly now.
The usual approach would be to pass the parameters as a list to the initialization of your optimizer, but your approach seems to work for data_phrases.
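E.g. a minimal sketch of that approach (placeholder shapes, optimizer, and loss):

```python
import torch

# both trainable tensors are leaf tensors created with requires_grad=True
data_phrases = torch.randn(10, 300, requires_grad=True)
data_words = torch.randn(50, 300, requires_grad=True)

# pass both of them as a list when creating the optimizer
optimizer = torch.optim.SGD([data_phrases, data_words], lr=0.1)

loss = data_phrases.pow(2).sum() + data_words.pow(2).sum()  # dummy loss using both tensors
loss.backward()
optimizer.step()  # both tensors should now contain updated values
```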
I then replaced (data_phrases) with (data_words), and when I checked the gradient of (data_words) it was showing values and not "None", while (data_phrases) was the one whose .grad gave "None" this time.
So I guess I need to pass the parameters to the optimizer in a different way, maybe in one instruction together? The problem is the optimizer definition that I have before the training loop: if I use data_phrases and data_words there before they are defined, there might be an error.
Can't I do something like self.optimizer.add_param_group({"params": [data_phrases, data_words]}) in the training phase? I tried that, but it is still not working!
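This is roughly what I am trying, simplified with placeholder tensors and a dummy loss, in case it helps spot the problem:

```python
import torch

data_phrases = torch.randn(10, 300, requires_grad=True)
optimizer = torch.optim.SGD([data_phrases], lr=0.1)

# data_words comes from another file in my real code
data_words = torch.randn(50, 300, requires_grad=True)
optimizer.add_param_group({"params": [data_words]})  # added during the training phase

loss = data_phrases.sum() + data_words.sum()  # dummy loss that uses both tensors
loss.backward()
optimizer.step()
```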