Weird NotImplementedError

I am trying to make some changes to RepLKNet, published at CVPR 2022, so I split the network into several parts using .children(). My code is shown below:

import torch

from networks import replknet

if __name__ == "__main__":

    basenet = replknet.create_RepLKNet31B(small_kernel_merged=False, use_checkpoint=True)

    # split the network into its top-level children
    children = list(basenet.children())
    self_stem_block = children[0]
    self_main_block_0 = children[1][0]
    self_main_block_1 = children[1][1]
    self_main_block_2 = children[1][2]
    self_main_block_3 = children[1][3]
    self_out_conv = children[2]
    self_sync_bn = children[3]
    self_avg_pool = children[4]
    self_classifier = children[5]

    x = torch.ones(1, 3, 224, 224).cuda()

    x = self_stem_block(x)

replknet is the RepLKNet code, which can be found at RepLKNet-pytorch/replknet.py at main · DingXiaoH/RepLKNet-pytorch · GitHub

Then it gives this error:

Traceback (most recent call last):
  File "test_load_model.py", line 66, in <module>
    x = self_stem_block(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented
    raise NotImplementedError
NotImplementedError

It seems some of the modules in RepLKNet do not implement a forward method.

Does anyone know why? Any suggestion is appreciated.

Probably print self_stem_block and see what kind of module it is.
If it's an nn.ModuleList, it has no forward method.
Check RepLKNet's forward() method to see how it extracts the features, and try to follow the same approach.
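
Something like this minimal sketch, assuming self_stem_block turns out to be an nn.ModuleList (RepLKNet's own forward pass walks through its stages in a similar way):

import torch
import torch.nn as nn

print(type(self_stem_block))  # e.g. <class 'torch.nn.modules.container.ModuleList'>

x = torch.ones(1, 3, 224, 224)
if isinstance(self_stem_block, nn.ModuleList):
    # ModuleList has no forward(); call each submodule in sequence instead
    for stem_layer in self_stem_block:
        x = stem_layer(x)
else:
    x = self_stem_block(x)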

Thank you for your reply. Following your suggestion, I changed my code so that each submodule of the ModuleList can be accessed. But it gives me another error, shown below:

import torch

from networks import replknet

if __name__ == "__main__":

    basenet = replknet.create_RepLKNet31B(small_kernel_merged=False, use_checkpoint=True)

    children = list(basenet.children())
    self_stem_block = children[0]
    self_main_block_0 = children[1][0]
    self_main_block_1 = children[1][1]
    self_main_block_2 = children[1][2]
    self_main_block_3 = children[1][3]
    self_out_conv = children[2]
    self_sync_bn = children[3]
    self_avg_pool = children[4]
    self_classifier = children[5]

    x = torch.ones(1, 3, 224, 224).cuda()

    # index into the ModuleList to get its first submodule
    self_stem_block_0 = self_stem_block[0]
    self_stem_block_0.cuda()

    x = self_stem_block_0(x)

Traceback (most recent call last):
  File "test_load_model_rlk.py", line 86, in <module>
    x = self_stem_block_0(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/batchnorm.py", line 732, in forward
    world_size = torch.distributed.get_world_size(process_group)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 845, in get_world_size
    return _get_group_size(group)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 306, in _get_group_size
    default_pg = _get_default_group()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/distributed_c10d.py", line 411, in _get_default_group
    "Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
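
For reference, this traceback shows the failure happens inside nn.SyncBatchNorm.forward(), which queries torch.distributed for the world size and therefore requires an initialized default process group. A minimal sketch of a workaround for single-process debugging (the address and port below are placeholders) is to create a trivial one-process group before running the model:

import os
import torch.distributed as dist

# Hypothetical single-process setup so SyncBatchNorm can find a process group.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # placeholder address
os.environ.setdefault("MASTER_PORT", "29500")      # placeholder free port
dist.init_process_group(backend="gloo", rank=0, world_size=1)

Alternatively, if replknet offers a way to build the model with plain BatchNorm2d instead of SyncBatchNorm, that would avoid needing a process group at all.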