What's num_chunks in rnncellbase class?

I am trying to run old code written for PyTorch 0.3 in PyTorch 1.x, and a parameter mismatch error occurred.

Traceback (most recent call last):
  File "main_LM.py", line 113, in <module>
    args.tied, args.hard, args.res)
  File "/content/PRPN/model_PRPN.py", line 31, in __init__
    self.reader = nn.ModuleList([ReadingNetwork(ninp, nhid, nslots, dropout=dropout, idropout=idropout), ] +
  File "/content/PRPN/ReadingNetwork.py", line 19, in __init__
    self.memory_rnn = LSTMCell(ninp, nout)
  File "/content/PRPN/LSTMCell.py", line 23, in __init__
    super(LSTMCell, self).__init__()
TypeError: __init__() missing 4 required positional arguments: 'input_size', 'hidden_size', 'bias', and 'num_chunks'

I checked the original PyTorch 0.3 source code. There was no num_chunks parameter in the RNNCellBase class. So what does this newly added parameter do?


num_chunks is used to initialize the parameters in the different RNN variants.
E.g. an LSTM uses num_chunks=4, as the weight and bias parameters contain 4 parts, as seen in the docs (take a look at the "Variables" section).
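You can see this in the parameter shapes of the built-in cell. A minimal check (the shapes below match what the nn.LSTMCell docs list: weight_ih is (4 * hidden_size, input_size), where the factor 4 is num_chunks):

```python
import torch.nn as nn

# The LSTM packs its four gates (i, f, g, o) into one weight matrix,
# so num_chunks=4 multiplies the first dimension of the parameters.
cell = nn.LSTMCell(input_size=10, hidden_size=20)
print(cell.weight_ih.shape)  # torch.Size([80, 10]) -> 4 * 20 rows
print(cell.bias_ih.shape)    # torch.Size([80])
```

A GRU cell would use num_chunks=3 (reset, update, new gates) and a vanilla RNN cell num_chunks=1.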

It seems your code uses a custom LSTMCell implementation, so you could either use PyTorch's built-in LSTMCell, update the code for your custom layer, or keep your code and copy the necessary code from the base classes into your current implementation.
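If you want to keep the custom layer, the minimal fix is usually just forwarding the required arguments to the base class. A sketch (assuming the custom cell subclasses nn.RNNCellBase, as the traceback suggests; I don't know the rest of your LSTMCell.py):

```python
import torch.nn as nn

class LSTMCell(nn.RNNCellBase):
    def __init__(self, input_size, hidden_size, bias=True):
        # In PyTorch 1.x, RNNCellBase.__init__ requires these four arguments.
        # num_chunks=4 because an LSTM stacks four gates into its weights,
        # so the base class allocates (4 * hidden_size, input_size) parameters.
        super(LSTMCell, self).__init__(input_size, hidden_size, bias, num_chunks=4)
        # ... rest of the custom implementation
```

After this change the base class will create weight_ih, weight_hh, bias_ih, and bias_hh with the right shapes, and the old 0.3-era forward code should find them under the same names.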
