Unable to Use Loss Function

Hello, I created the following somewhat succinct neural net (code cited later on):

Now the original version uses a simple two-layer network, but I was trying to make a larger version with an input layer, a module list, and an output layer. However, it seems the loss function is not implemented for a sequential NN if it contains a module list (I think), as I get a NotImplementedError on the loss_fun call when using the larger neural net.

The initialization of the nn is the only step I changed in the code. You can see the before version in the code (uncommented), and the after version as well (commented). Before:
class FirstNetwork_v2(nn.Module):
  def __init__(self):
    super().__init__()
    torch.manual_seed(0)
    self.net = nn.Sequential( # sequential operation
      nn.Linear(2, 2),
      nn.Sigmoid(),
      nn.Linear(2, 4),
      nn.Softmax())

After:
class FirstNetwork_v2(nn.Module):
  def __init__(self):
    super().__init__()
    torch.manual_seed(0)

    self.net = nn.Sequential(
      nn.Linear(2, 10, bias=False),
      nn.ModuleList(),
      nn.Linear(10, 4, bias=False),
      nn.Softmax())

    for _ in range(3):
      self.net[1].append(nn.Linear(10, 10, bias=False))
      self.net[1].append(nn.ReLU())

In conclusion, is there any loss function that allows me to use more complicated neural nets? Thank you very much!

In the spirit of citing original work, this neural net initially used the following guide:

The error is raised by nn.ModuleList, which doesn’t implement a forward method, but acts as a list with the additional property of registering all parameters and buffers in the parent nn.Module.
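You can see this directly, independent of any loss function: calling an nn.ModuleList on a tensor (which is exactly what nn.Sequential does with each of its children) raises the NotImplementedError. A minimal reproduction:

```python
import torch
import torch.nn as nn

# nn.ModuleList registers its submodules (so their parameters show up
# in model.parameters()), but it defines no forward() method, so
# calling it like a layer raises NotImplementedError.
layers = nn.ModuleList([nn.Linear(10, 10)])
x = torch.randn(1, 10)

try:
    layers(x)  # nn.Sequential would attempt exactly this call
except NotImplementedError:
    print("nn.ModuleList has no forward()")
```

This shows the error comes from the forward pass through the container, not from the loss function itself.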
You could use another nn.Sequential container and .add_module instead:

net = nn.Sequential(
    nn.Linear(2, 10, bias=False),
    nn.Sequential(),
    nn.Linear(10, 4, bias=False),
    nn.Softmax(dim=1))

for i in range(3):
    net[1].add_module(str(i), nn.Linear(10, 10, bias=False))
    net[1].add_module(str(i)+'relu', nn.ReLU())
    
x = torch.randn(1, 2)
out = net(x)
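Alternatively, if you prefer to keep the nn.ModuleList, you can iterate over it manually in a custom forward(). A sketch using the layer sizes from your second snippet (the class name FirstNetwork_v3 is just a placeholder):

```python
import torch
import torch.nn as nn

class FirstNetwork_v3(nn.Module):
    def __init__(self):
        super().__init__()
        torch.manual_seed(0)
        self.inp = nn.Linear(2, 10, bias=False)
        # ModuleList is fine here because forward() loops over it explicitly
        self.hidden = nn.ModuleList()
        for _ in range(3):
            self.hidden.append(nn.Linear(10, 10, bias=False))
            self.hidden.append(nn.ReLU())
        self.out = nn.Linear(10, 4, bias=False)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.inp(x)
        for layer in self.hidden:  # apply each registered module in order
            x = layer(x)
        return self.softmax(self.out(x))

model = FirstNetwork_v3()
out = model(torch.randn(1, 2))  # shape (1, 4), rows sum to 1
```

Both approaches are equivalent; nn.Sequential just saves you from writing the loop yourself.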