I was trying to adapt the training loop to work for torch::jit::script::Module, so I needed the same interface that the optimizer expects. However, I realized that the optimizer accepts std::vector<at::Tensor>, so I actually don’t have to create slot_list_impl after all.
Maybe you can take a look at what I was trying to do here: jit::script::Module parameters are not updating when training
I think for the code to actually work, I need to use register_parameter or set_parameter to push the updated tensor back into the script::Module after the optimizer has acted on it. Does that sound right? Or could there be a way for the optimizer to update the tensor values in place?