Sometimes we need to create a module with learnable parameters. For example, when we construct an A-Softmax module, the module needs to contain a weight W that is learned and updated during training.
By looking at the docs, it seems that I should use it like this:
1. You need to run __init__ for the module's base class.
2. The parameters should be wrapped in a torch.nn.Parameter.
3. torch.random is a module; if you are using torch.randn to draw from a normal distribution, putting these together gives something like the sketch below:
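A minimal sketch of such a module; the class name AngleLinear, the shapes, and the forward pass are illustrative assumptions, not the exact code from the question:

```python
import torch
import torch.nn as nn

class AngleLinear(nn.Module):  # hypothetical name for the A-Softmax-style module
    def __init__(self, in_features, out_features):
        super().__init__()  # step 1: run __init__ of the base class
        # step 2: wrap the weight in nn.Parameter so it is registered
        # as a learnable parameter of the module;
        # step 3: initialize it with torch.randn (normal distribution)
        self.W = nn.Parameter(torch.randn(in_features, out_features))

    def forward(self, x):
        return x.mm(self.W)
```

However, when I try to hand the parameters of two such modules to the optimizer, I get: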
```
File "/home/zhangzy/.local/lib/python3.5/site-packages/torch/optim/optimizer.py", line 192, in add_param_group
    "but one of the params is " + torch.typename(param))
TypeError: optimizer can only optimize Tensors, but one of the params is Module.parameters
```
Have a look here. *.parameters() returns a generator. Normally we would do torch.optim.Adam(loss.parameters(), lr=...) when dealing with just one set of parameters, but since you have two sets here, you will need to turn one generator into a list and extend it with the other, as in the sketch below:
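A minimal sketch of combining the two parameter sets; the names net and loss are hypothetical stand-ins for your two parameter-holding modules:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for your two parameter-holding modules.
net = nn.Linear(10, 20)
loss = nn.Linear(20, 5)

# parameters() returns a generator, so materialize one set as a list
# and extend it with the other before passing it to the optimizer.
params = list(net.parameters())
params.extend(list(loss.parameters()))

optimizer = torch.optim.Adam(params, lr=1e-3)
```

Equivalently, passing itertools.chain(net.parameters(), loss.parameters()) to the optimizer also works, since the optimizer just iterates over whatever iterable of Tensors it receives.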