RuntimeError: The size of tensor a (32) must match the size of tensor b (16) at non-singleton dimension 3

Why does the following error pop up when the NUM_OF_CELLS variable is increased from 8 to 16?

/home/phung/PycharmProjects/venv/py39/bin/python /home/phung/PycharmProjects/beginner_tutorial/gdas.py
Files already downloaded and verified
Files already downloaded and verified
run_num =  0
Entering train_NN(), forward_pass_only =  0
modules =  <generator object Module.named_children at 0x7fa359f57e40>
c =  0  , n =  0  , cc =  0  , e =  0
Traceback (most recent call last):
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 841, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 605, in train_NN
    NN_output = graph.forward(NN_input)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 353, in forward
    self.cells[c].nodes[n].connections[
RuntimeError: The size of tensor a (32) must match the size of tensor b (16) at non-singleton dimension 3

Process finished with exit code 1
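For context, this kind of mismatch typically happens when two edges feeding the same node carry feature maps of different spatial size, e.g. one path went through a stride-2 reduction cell and the other did not, so an elementwise combine fails at the width dimension (dimension 3). A minimal sketch, assuming that is what happens inside gdas.py (the file itself is not shown, so the shapes here are illustrative):

```python
import torch
import torch.nn.functional as F

# Two hypothetical edge outputs feeding the same node:
a = torch.randn(1, 3, 32, 32)   # path that skipped the reduction cell
b = torch.randn(1, 3, 16, 16)   # path after a stride-2 reduction cell

# Elementwise addition requires matching (or broadcastable) shapes,
# so this reproduces the reported RuntimeError at dimension 3:
try:
    _ = a + b
except RuntimeError as e:
    print(e)

# One common fix: downsample the larger map before combining the edges.
a_small = F.avg_pool2d(a, kernel_size=2)
print((a_small + b).shape)      # now both paths are 16x16
```

The general remedy is to make sure every connection entering a node is first projected (pooling or a strided 1x1 convolution) to the node's expected spatial size before the outputs are summed.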

If I increase it further to 32, the previous dimension-mismatch error somehow disappears, but the following new error appears instead.

[W python_anomaly_mode.cpp:104] Warning: Error detected in LogSoftmaxBackward0. Traceback of forward call that caused the error:
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 843, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 622, in train_NN
    Ltrain = criterion(NN_output, NN_train_labels)
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1150, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/nn/functional.py", line 2846, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
 (function _print_stack)
Traceback (most recent call last):
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 843, in <module>
    ltrain = train_NN(forward_pass_only=0)
  File "/home/phung/PycharmProjects/beginner_tutorial/gdas.py", line 632, in train_NN
    Ltrain.backward()
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/phung/PycharmProjects/venv/py39/lib/python3.9/site-packages/torch/autograd/__init__.py", line 154, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Function 'LogSoftmaxBackward0' returned nan values in its 0th output.
tensor(1., device='cuda:0')

Process finished with exit code 1
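A plausible explanation for the second failure: LogSoftmaxBackward0 returns nan when the logits reaching cross_entropy already contain inf/nan, e.g. because activations blew up as the network got deeper with NUM_OF_CELLS = 32. A minimal sketch of that failure mode (the logit value here is illustrative, not taken from gdas.py):

```python
import torch
import torch.nn.functional as F

# If an activation overflows to inf somewhere in the forward pass,
# log_softmax computes inf - inf = nan internally, and the backward
# pass then reports nan gradients, exactly as in the traceback above.
logits = torch.tensor([[float('inf'), 0.0]], requires_grad=True)
loss = F.cross_entropy(logits, torch.tensor([1]))
loss.backward()
print(torch.isnan(logits.grad).any())   # nan gradient detected

# A hypothetical sanity check one could place just before
# Ltrain = criterion(NN_output, NN_train_labels) in train_NN():
def check_finite(t, name="NN_output"):
    if not torch.isfinite(t).all():
        raise RuntimeError(f"{name} contains inf/nan before the loss")
```

If the check fires, the usual suspects are a too-large learning rate, missing normalization between cells, or exploding gradients; lowering the learning rate or adding torch.nn.utils.clip_grad_norm_ before the optimizer step are common first things to try.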