How can I add a variable in a network definition that gets tuned during training?

Thanks a lot, I really appreciate it. However, I need to write a very simple example for myself so that I can fully understand how everything works.
There is an example in the docs that shows how this is done manually, that is, by writing the optimization steps by hand (computing the gradients and then updating the variables accordingly).
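To make sure I understand that manual approach, here is my own toy version of such a loop (just my paraphrase of the idea, not the exact code from the docs):

import torch

# a single scalar variable that I tune by hand, without an optimizer
w = torch.tensor(0.0, requires_grad=True)
lr = 0.01

for _ in range(100):
    loss = (w * 3.0 - 6.0) ** 2   # toy objective, minimized at w = 2
    loss.backward()               # compute d(loss)/dw into w.grad
    with torch.no_grad():
        w -= lr * w.grad          # manual gradient-descent update
        w.grad.zero_()            # reset the gradient for the next step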
However, I should still be able to set some parameters in my model, right? So that when I pass model.parameters() to an optimizer, they get optimized accordingly.
So basically I should be able to create a new Parameter first and then add it to my module by registering it as a module attribute, something like this:

myvar = torch.tensor(0.0)                         # the value I want trained
myparam = torch.nn.Parameter(myvar)               # wrap it as a Parameter (requires_grad=True by default)
mymodel.register_parameter('myparam', myparam)    # register it under an explicit name
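Putting it together, here is a minimal self-contained sketch of what I have in mind (the module name, parameter names, and toy data are all just placeholders I made up):

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # assigning a Parameter as an attribute registers it automatically
        self.weight = nn.Parameter(torch.tensor(1.0))
        # alternatively, register it explicitly under a chosen name
        self.register_parameter('bias', nn.Parameter(torch.tensor(0.0)))

    def forward(self, x):
        return self.weight * x + self.bias

model = MyModel()
print([name for name, _ in model.named_parameters()])  # ['weight', 'bias']

# the optimizer now sees and updates both parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.tensor([1.0, 2.0, 3.0])
target = 2.0 * x + 1.0            # toy data: y = 2x + 1
for _ in range(200):
    optimizer.zero_grad()
    loss = ((model(x) - target) ** 2).mean()
    loss.backward()
    optimizer.step()
print(model.weight.item(), model.bias.item())  # should end up close to 2.0 and 1.0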

I assume this should add my new parameter to the module's list of parameters. Is this assumption correct? I ask because, according to the documentation for Parameter:

A kind of Tensor that is to be considered a module parameter.

Parameters are Tensor subclasses, that have a very special property when used with Modules: when they're assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator. Assigning a Tensor doesn't have such effect. This is because one might want to cache some temporary state, like last hidden state of the RNN, in the model. If there was no such class as Parameter, these temporaries would get registered too.
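If I read that quote correctly, the difference should be directly observable, something like this quick check I put together (the class name is made up):

import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        self.p = nn.Parameter(torch.zeros(3))  # Parameter attribute: auto-registered
        self.t = torch.zeros(3)                # plain Tensor attribute: not registered

m = Demo()
print([name for name, _ in m.named_parameters()])  # ['p'] -- only the Parameter shows up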

Update:
For the answer, click here!