Assume I have created an object as described in Extending AutoGrad - Extending torch, i.e. a Tensor wrapper type.
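Roughly, the wrapper looks like this (a minimal sketch along the lines of the docs example; the name `WrappedTensor` and the unwrap/rewrap details are placeholders for my actual class):

```python
import torch

class WrappedTensor:
    # Plain wrapper around a torch.Tensor using the __torch_function__ protocol.
    def __init__(self, data: torch.Tensor):
        self.data = data  # the plain tensor held inside the wrapper

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}

        def unwrap(x):
            return x.data if isinstance(x, WrappedTensor) else x

        # Unwrap arguments, run the real op, rewrap the result; no type checks.
        out = func(*[unwrap(a) for a in args],
                   **{k: unwrap(v) for k, v in kwargs.items()})
        return WrappedTensor(out) if isinstance(out, torch.Tensor) else out
```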
I do not need any checks; I just need the object to be passed through a custom autograd.Function that computes the backward pass (again without any checks on the type or anything else) and to be recognized by a custom optimizer as a parameter.
Wrapping the object in nn.Parameter does not work, and self.register_parameter also requires an nn.Parameter.
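Concretely, both of the obvious registration paths fail, since the wrapper is neither a plain torch.Tensor nor an nn.Parameter (`MyModule` is a toy example):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        w = WrappedTensor(torch.randn(3))
        # Fails: nn.Parameter only accepts a plain torch.Tensor.
        # self.weight = nn.Parameter(w)
        # Fails: register_parameter insists on an nn.Parameter (or None).
        # self.register_parameter("weight", w)
```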
Is there any way to brute-force register the tensor as a parameter, so that it appears in the model.parameters() list and receives updates from the optimizer?
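Something along these lines is what I mean by brute-force, though I do not know whether bypassing register_parameter like this is actually supported (hypothetical sketch):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical: write straight into the internal _parameters dict,
        # skipping register_parameter's type check entirely.
        self._parameters["weight"] = WrappedTensor(torch.randn(3))

m = MyModule()
print(list(m.parameters()))  # would this yield the wrapper to the optimizer?
```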