I am using the torch.optim.Adam optimizer with my own model class ('LinkPredict'). When I try to train my model, the following error is thrown:
File "...", line 350, in train
optimizer = th.optim.Adam(all_params, lr=learning_rate, weight_decay=weight_decay)
File "/.../torch/optim/adam.py", line 48, in __init__
super(Adam, self).__init__(params, defaults)
File "/.../torch/optim/optimizer.py", line 45, in __init__
param_groups = list(params)
File ".../torch/nn/modules/module.py", line 1089, in parameters
for name, param in self.named_parameters(recurse=recurse):
File ".../torch/nn/modules/module.py", line 1115, in named_parameters
for elem in gen:
File ".../torch/nn/modules/module.py", line 1059, in _named_members
for module_prefix, module in modules:
File ".../torch/nn/modules/module.py", line 1250, in named_modules
if self not in memo:
TypeError: unhashable type: 'LinkPredict'
My model is defined as follows:
class LinkPredict(nn.Module):
    """...."""
    def __init__(self,
                 ...
                 ):
        super(LinkPredict, self).__init__()
Do you have any idea why this error is thrown and how I can solve it?
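For reference, here is a minimal, self-contained sketch (not my actual code) that raises the same TypeError. I am not sure whether this matches my situation, but it shows one way an nn.Module subclass can become unhashable, namely by defining __eq__ without __hash__:

import torch
import torch.nn as nn

class LinkPredict(nn.Module):
    def __init__(self):
        super(LinkPredict, self).__init__()
        self.fc = nn.Linear(4, 1)

    # Defining __eq__ without also defining __hash__ sets __hash__ to None,
    # so instances can no longer be stored in a set.
    def __eq__(self, other):
        return isinstance(other, LinkPredict)

model = LinkPredict()
# Adam iterates model.parameters(), which internally walks named_modules()
# and does "if self not in memo" on a set, so it needs hash(model).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# TypeError: unhashable type: 'LinkPredict'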
Update: I commented out the optimizer to circumvent the initialization problem, and surprisingly, a similar error is thrown again. So this has to be something about the model rather than the optimizer:
Traceback (most recent call last):
...
File ".../deployer.py", line 383, in train
model.train()
File "/.../.conda/envs/deeplink/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1276, in train
for module in self.children():
File ".../.conda/envs/deeplink/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1172, in children
for name, module in self.named_children():
File "/.../.conda/envs/deeplink/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1191, in named_children
if module is not None and module not in memo:
TypeError: unhashable type: 'BaseModel'
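Both failures happen at the point where PyTorch puts module objects into a set (memo), so I suspect a hashability problem somewhere in the class hierarchy. A check along these lines should show where hashing gets disabled (just a sketch; model stands for my LinkPredict instance):

# Diagnostic sketch; 'model' is a placeholder for my LinkPredict instance.
# nn.Module instances are hashable by default; if any class in the hierarchy
# defines __eq__ without __hash__, Python sets __hash__ = None on that class
# and its instances become unhashable.
print(type(model).__hash__)  # None would explain "unhashable type"
for cls in type(model).__mro__:
    print(cls.__name__, vars(cls).get("__hash__", "<inherited>"))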
Since self._modules is checked in the function where the error is thrown, I tried that in my console. When I check self._modules, I get: