NotImplementedError in DenseNet implementation

I tried to implement the DenseNet architecture trained on CIFAR and SVHN, which has 100 layers and 3 dense blocks, by changing the parameters of the DenseNet code in the torchvision package.
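Concretely, the change amounts to something like the following (these are my parameter values for DenseNet-BC-100 with growth rate 12, not the exact original code; torchvision's ImageNet-style stem of a 7x7 conv plus max pooling would also need to be swapped for a single 3x3 conv to handle 32x32 inputs):

import torch
from torchvision.models.densenet import DenseNet

# Depth 100 with bottleneck layers: (100 - 4) / 6 = 16 layers per block,
# 3 dense blocks, growth rate k = 12, 2k = 24 initial features.
net = DenseNet(growth_rate=12, block_config=(16, 16, 16),
               num_init_features=24, num_classes=10)
print(net)  # printing the architecture works even if forward is broken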

Although I could print my net architecture, I got a NotImplementedError right at the beginning when I ran the code.


NotImplementedError                       Traceback (most recent call last)
<ipython-input> in <module>()
     16 print(r_x.size())
     17 t_y = torch.cat([r_y,l_y],1)
---> 18 output3 = net(r_x,l_x)
     19 loss = loss_func(output3, t_y)
     20 acc_loss += loss.data[0]

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in forward(self, *input)
     63         Should be overriden by all subclasses.
     64         """
---> 65         raise NotImplementedError
     66
     67     def register_buffer(self, name, tensor):

NotImplementedError:

I have no idea what is going on.
Could anyone help me out with this?
Thanks a lot!!


Hi,

It seems you have not defined a forward method.
All subclasses of nn.Module should override that method.

Please have a look at the docs/examples.
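For reference, here is a minimal module with forward defined correctly (the layer and sizes are arbitrary, just for illustration):

import torch
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.fc = nn.Linear(10, 2)

    # nn.Module.__call__ dispatches to this method; if the subclass does not
    # define it, the base class placeholder raises NotImplementedError.
    def forward(self, x):
        return self.fc(x)

net = MyNet()
out = net(torch.randn(4, 10))  # net(...) invokes forward under the hood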


OMG!

I checked my DenseNet class and found that I had misspelled the word "forward"...
Now it runs normally.

Thank you!!


Haha, I had misspelled "forward" too. What are the odds? Anyway, this saved me a lot of time...

Misspelled forward as well…

I forgot to indent "def forward(self, x):".

The error can be read as: Python can't find a forward method in the class.

Haha, I did that too. Thank you, your reply saved me time.


lol, I wrote "forwad"

🤦 lol, came to share the shame. Made the same spelling mistake: "foward".

A mismatch in indentation also results in the same error. Make sure the method definitions inside the class are all at the same indentation level.

I was like, how can everyone be misspelling forward? That's not my error!

def foward(self, x):

*kills self*

import torch
import torch.nn as nn
import torch.nn.functional as F

class Fashion_MNIST_NeuralNet(nn.Module):

    def __init__(self):
        # Calling the nn.Module constructor for the reason mentioned above...
        super(Fashion_MNIST_NeuralNet, self).__init__()
        self.fMnistConv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        self.fMnistConv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)

        # adding fully connected layers
        self.fMnist_fc1 = nn.Linear(12 * 4 * 4, 120)
        self.fMnist_fc2 = nn.Linear(120, 84)
        self.fMnist_fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # applying max_pool2d to reduce the dimensionality of the
        # convolution outputs; max pooling over a (2, 2) window
        print('Forward pass')
        x = F.max_pool2d(F.relu(self.fMnistConv1(x)), (2, 2))
        # if the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.fMnistConv2(x)), 2)
        print("===x===1", x.shape)
        x = x.view(-1, self.get_num_features(x))
        print("==View==", x.size())
        x = F.relu(self.fMnist_fc1(x))
        x = F.relu(self.fMnist_fc2(x))
        x = self.fMnist_fc3(x)
        return x

    def get_num_features(self, x):
        print('Number of features found =====', x.shape)
        size = x.size()[1:]  # all dimensions except the batch dimension
        print("Size", size)
        num_features = 1
        for s in size:
            num_features *= s
        print("==Mult==", num_features)
        return num_features

fn = Fashion_MNIST_NeuralNet()
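As a quick sanity check (assuming Fashion-MNIST's 1x28x28 inputs), a dummy batch can be pushed through the freshly built net; if forward were missing or misspelled, this single call would already raise NotImplementedError:

out = fn(torch.randn(1, 1, 28, 28))
print(out.shape)  # expected: torch.Size([1, 10])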

I'm getting an error in this snippet:

import torch
import torch.nn as nn

def fit_model(model, train_loader):
    # train_loader comes from the DataLoader created earlier...
    optimizer = torch.optim.Adam(model.parameters())
    '''
    Parameters :-
    conv1.weight torch.Size([6, 1, 5, 5])
    conv1.bias torch.Size([6])
    conv2.weight torch.Size([12, 6, 5, 5])
    conv2.bias torch.Size([12])
    fc1.weight torch.Size([120, 192])
    fc1.bias torch.Size([120])
    fc2.weight torch.Size([84, 120])
    fc2.bias torch.Size([84])
    fc3.weight torch.Size([10, 84])
    fc3.bias torch.Size([10])
    '''
    error = nn.CrossEntropyLoss()
    EPOCHS = 5
    model.train()
    correct = 0

    for epoch in range(EPOCHS):
        print(epoch)

        for idx, (image, label) in enumerate(train_loader):
            var_X_batch = torch.randn(image.shape)
            print(var_X_batch.shape)
            var_y_batch = torch.tensor(label)

            optimizer.zero_grad()
            # passing the batch of images to the net
            print('Debug 1')
            output = model(var_X_batch)
            print('Debug 2')
            loss = error(output, var_y_batch)
            print('Debug 3')
            loss.backward()
            # backpropagation and gradient step
            optimizer.step()

            predicted = torch.max(output.data, 1)[1]
            correct += (predicted == var_y_batch).sum()
            # print(correct)
            if idx % 50 == 0:
                print('Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}\t Accuracy:{:.3f}%'.format(
                    epoch, idx * len(image), len(train_loader.dataset),
                    100. * idx / len(train_loader), loss.data[0],
                    float(correct * 100) / float(BATCH_SIZE * (idx + 1))))

Error: NotImplementedError
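One quick way to narrow this down is to check whether the class itself, rather than the nn.Module base, actually defines a forward method:

# True only when the subclass defines forward itself; False if the name is
# misspelled or the def is accidentally nested inside __init__
print('forward' in type(fn).__dict__)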

Answered here.

I used to have this problem and had a difficult time figuring it out at first. When I started looking deeper into the model, I figured out that the forward function in my class was defined inside the __init__ function. Maybe this could be the reason for your problem too.
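To illustrate that pitfall with a contrived sketch: indented one level too deep, forward becomes a local function inside __init__ rather than a method of the class, so nn.Module's placeholder is what ends up being called:

import torch
import torch.nn as nn

class BrokenNet(nn.Module):
    def __init__(self):
        super(BrokenNet, self).__init__()
        self.fc = nn.Linear(10, 2)

        # Wrong: nested inside __init__, this is just a local function that
        # is created and discarded; BrokenNet has no forward of its own.
        def forward(self, x):
            return self.fc(x)

net = BrokenNet()
net(torch.randn(4, 10))  # raises NotImplementedError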


After wasting 45 minutes, I finally saw your comment. Thank you!