Hello, I created the following somewhat succinct neural net (the guide it originally came from is cited at the end).
The original version uses a simple two-layer network, but I was trying to make a larger version using an input layer, a module list of hidden layers, and then an output layer. However, it seems the forward pass is not implemented for a sequential network that contains a module list (I think), because when I use the larger net, the `loss_fun` call fails with a `NotImplementedError`.
The initialization of the network is the only step I changed in the code. You can see the before version in the code (uncommented), and the after version as well (commented).

Before:
```python
class FirstNetwork_v2(nn.Module):
    def __init__(self):
        super().__init__()
        torch.manual_seed(0)
        self.net = nn.Sequential(  # sequential operation
            nn.Linear(2, 2),
            nn.Sigmoid(),
            nn.Linear(2, 4),
            nn.Softmax(dim=1))  # dim=1: softmax over the class dimension
```
After:
```python
class FirstNetwork_v2(nn.Module):
    def __init__(self):
        super().__init__()
        torch.manual_seed(0)
        self.net = nn.Sequential(
            nn.Linear(2, 10, bias=False),
            nn.ModuleList(),  # hidden layers get appended below
            nn.Linear(10, 4, bias=False),
            nn.Softmax(dim=1))
        for _ in range(3):
            self.net[1].append(nn.Linear(10, 10, bias=False))
            self.net[1].append(nn.ReLU())
```
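To make the failure concrete, here is a minimal reproduction (my own sketch, with the shapes from the code above): `nn.ModuleList` is only a container and has no `forward()` of its own, so a `Sequential` that reaches it raises `NotImplementedError` regardless of the loss function. Flattening the same layers directly into the `Sequential` runs fine:

```python
import torch
import torch.nn as nn

# Reproduction: a Sequential containing a ModuleList fails when called,
# because ModuleList does not implement forward().
net = nn.Sequential(
    nn.Linear(2, 10, bias=False),
    nn.ModuleList([nn.Linear(10, 10, bias=False), nn.ReLU()]),
    nn.Linear(10, 4, bias=False),
    nn.Softmax(dim=1))

x = torch.randn(5, 2)
try:
    net(x)
except NotImplementedError:
    print("NotImplementedError: ModuleList cannot be called like a layer")

# Workaround sketch: build the hidden layers in a plain Python list,
# then unpack everything into one flat Sequential.
layers = [nn.Linear(2, 10, bias=False)]
for _ in range(3):
    layers.append(nn.Linear(10, 10, bias=False))
    layers.append(nn.ReLU())
layers += [nn.Linear(10, 4, bias=False), nn.Softmax(dim=1)]
flat_net = nn.Sequential(*layers)

print(flat_net(x).shape)  # torch.Size([5, 4])
```

So the error seems to come from the model structure rather than from the loss function itself, which is what my question below is about.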
In conclusion, is there any loss function that allows me to use more complicated neural nets? Thank you very much!
In the spirit of citing original work, this neural net initially used the following guide: