"TypeError: unhashable type:" for my torch.nn.Module

I have used the torch.optim.Adam optimizer with my own model class (`LinkPredict`). When I try to train my model, the following error is thrown:

  File "...", line 350, in train
    optimizer = th.optim.Adam(all_params, lr=learning_rate, weight_decay=weight_decay)
  File "/.../torch/optim/adam.py", line 48, in __init__
    super(Adam, self).__init__(params, defaults)
  File "/.../torch/optim/optimizer.py", line 45, in __init__
    param_groups = list(params)
  File ".../torch/nn/modules/module.py", line 1089, in parameters
    for name, param in self.named_parameters(recurse=recurse):
  File ".../torch/nn/modules/module.py", line 1115, in named_parameters
    for elem in gen:
  File ".../torch/nn/modules/module.py", line 1059, in _named_members
    for module_prefix, module in modules:
  File ".../torch/nn/modules/module.py", line 1250, in named_modules
    if self not in memo:
TypeError: unhashable type: 'LinkPredict'

My model is defined as follows:

class LinkPredict(nn.Module):
    """...."""

    def __init__(self,
...
                 ):
        super(LinkPredict, self).__init__()

Do you have any idea why this error is thrown and how I can solve it?

Thank you a lot in advance!

How do you define it? Could you share the code snippet?

@MrPositron Yes, for sure. I define all_params as follows:

# Model initialization
model = LinkPredict()
embed_layer = EmbedLayer()

# Parameter initialization
learning_rate: float = 0.01
weight_decay: float = 0.0

all_params = itertools.chain(model.parameters(), embed_layer.parameters())
optimizer = th.optim.Adam(all_params, lr=learning_rate, weight_decay=weight_decay)

I simplified the model initialization, but the model is the LinkPredict mentioned above, and the EmbedLayer is defined as follows:

class EmbedLayer(nn.Module):
    r"""Embedding layer.."""

    def __init__(self,
                 ...
                 ):
        super(EmbedLayer, self).__init__()

Update: I commented out the optimizer so that I could circumvent the initialization problem, and surprisingly, a similar error is thrown again. So this has to be something about the model rather than the optimizer:

Traceback (most recent call last):
...
  File ".../deployer.py", line 383, in train
    model.train()
  File "/.../.conda/envs/deeplink/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1276, in train
    for module in self.children():
  File ".../.conda/envs/deeplink/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1172, in children
    for name, module in self.named_children():
  File "/.../.conda/envs/deeplink/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1191, in named_children
    if module is not None and module not in memo:
TypeError: unhashable type: 'BaseModel'

Since self._modules is checked in the function where the error is thrown, I inspected it in my console. When I check self._modules, I get:

self._modules
Out[2]: 
OrderedDict([('rgcn',
              BaseModel(g=..., metagraph=..., h_dim=64, out_dim=32, device=device(type='cpu'), num_hidden_layers=1, dropout=0.0, use_self_loop=False)),
             ('predictor', ScorePredictor())])

Therefore, the submodules are recognized. I still don't understand why my module is unhashable. Do you have any idea? @MrPositron

UPDATE: Error Solved!

I found the cause of the error:

My fault: I put the @dataclass decorator from dataclasses in front of the BaseModel, and this is what caused the issue.

@dataclass
class BaseRGCNHetero(nn.Module):
...

So for everybody else having this issue: do not apply the plain @dataclass decorator to an nn.Module subclass. By default @dataclass generates __eq__, which causes __hash__ to be set to None, making instances unhashable.
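Since the problem comes from dataclasses itself rather than from PyTorch, it can be reproduced without a model at all. A minimal sketch (the class names here are made up for illustration):

```python
from dataclasses import dataclass

# A plain class inherits object.__hash__, so instances are hashable.
class Plain:
    pass

# @dataclass with the default eq=True generates __eq__ and, because
# frozen is False, sets __hash__ to None, marking the class unhashable.
@dataclass
class Decorated:
    x: int = 0

print(Plain.__hash__ is None)      # False: still hashable
print(Decorated.__hash__ is None)  # True: instances are unhashable

try:
    {Decorated()}  # membership/set operations like nn.Module's `memo` set
except TypeError as e:
    print(e)  # unhashable type: 'Decorated'
```

This is exactly what fails inside named_modules: `if self not in memo` hashes the module to test set membership, and a hash of None raises the TypeError from the traceback.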


If anyone is looking to make a module that is a dataclass, the following decorator argument prevents the default hash method from being overridden:

@dataclass(eq=False)
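For context, eq=False tells the dataclass machinery not to generate __eq__, so the identity-based __hash__ inherited from the parent class is left intact, which is what nn.Module's internal bookkeeping relies on. A quick torch-free check of the same mechanism (the class name is hypothetical):

```python
from dataclasses import dataclass

# With eq=False no __eq__ is generated, so __hash__ is not set to
# None and instances keep identity-based hashing.
@dataclass(eq=False)
class HashableConfig:
    x: int = 0

m = HashableConfig()
print(HashableConfig.__hash__ is None)  # False: hash method preserved
print(m in {m})                         # True: set membership works again
```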