# CNN shows no learning

```python
import time as t
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from random import randint
import numpy as np


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(1620, 2)

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))
        x = F.relu(self.mp(self.conv2(x)))
        x = x.view(in_size, -1)  # flatten the tensor
        x = self.fc(x)
        return F.log_softmax(x, dim=1)


net = Net()

counter = 0
losses = []
for epoch in range(1):
    err = []
    for d in data:
        x, y = d
        x = x / 255  # normalizing the i/p image
        X = torch.Tensor([[x]])  # a 1x1x50x50 tensor
        Y = torch.Tensor([[y]])  # a 1x1 tensor
        Y = Y.long()
        output = net(X)
        loss = F.nll_loss(output, Y)
        losses.append(float(loss))
        counter += 1
        err.append(loss)
        if counter % 20 == 0:  # backprop every 20 iters
            k = torch.stack(err).mean()
            counter = 0
            err = []
            k.backward()
            optimizer.step()
            print(loss)
```

I wrote this CNN to solve the cats vs. dogs dataset. The loss remains more or less constant at ~0.7.
This is my first crack at CNNs. I'm just training with a batch size of one initially. You can find the numpy file here -> https://drive.google.com/file/d/1WF_Bti7x2K93AmUrIje8RGVmrBco71_S/view?usp=sharing
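As a sanity check on the posted architecture, the `1620` in `nn.Linear(1620, 2)` can be verified by pushing a dummy 1x1x50x50 input through the same conv/pool stack (this sketch reuses the question's layer definitions; the zero input is just a placeholder):

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 50x50 -> 46x46
conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 23x23 -> 19x19
mp = nn.MaxPool2d(2)                      # halves each spatial dim (floor)

x = torch.zeros(1, 1, 50, 50)             # dummy single-image batch
x = mp(conv1(x))                          # -> 1x10x23x23
x = mp(conv2(x))                          # -> 1x20x9x9
flat = x.view(1, -1)
print(flat.shape)                         # torch.Size([1, 1620])
```

So the flattened size does match `nn.Linear(1620, 2)`; the shapes are not the problem.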

Hey, I have a question: I don't quite understand what you are backpropagating every 20 iterations. Do you want a batch size of 20? Also, not related to the question, but why not normalize the input before training itself? Try changing your model around a bit, and try changing the learning rate as well. It's okay if your loss goes to infinity; at least you'll know something is happening.
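The "normalize before training" suggestion could look like this: scale and standardize the whole array once, up front, instead of dividing by 255 inside the loop (the `images` array here is a hypothetical stand-in for the asker's numpy file):

```python
import numpy as np

# Hypothetical stand-in for the raw uint8 image data (N x 50 x 50).
images = np.random.randint(0, 256, size=(4, 50, 50), dtype=np.uint8)

images = images.astype(np.float32) / 255.0   # scale to [0, 1] once
mean, std = images.mean(), images.std()
images = (images - mean) / (std + 1e-8)      # zero mean, unit variance
print(images.mean(), images.std())           # ~0.0, ~1.0
```

Doing this once also avoids integer-division surprises and keeps the training loop free of per-sample preprocessing.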

The 20-iterations thing was just to mimic using a batch of 20.
I've messed around with the learning rates and still don't seem to make any progress,
so I'm still kind of confused about where the issue is.
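For what it's worth, mimicking a batch of 20 by accumulating gradients needs two things the posted loop doesn't have: an `optimizer` actually constructed over `net.parameters()`, and `optimizer.zero_grad()` after every step (otherwise gradients from all previous "batches" keep piling up). A minimal sketch with a hypothetical toy model and random data:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

model = nn.Linear(4, 2)                            # stand-in for the real CNN
optimizer = optim.SGD(model.parameters(), lr=0.01) # must be defined!
w_before = model.weight.detach().clone()

batch = 20
optimizer.zero_grad()
for i in range(100):
    x = torch.randn(1, 4)                          # hypothetical single sample
    y = torch.randint(0, 2, (1,))                  # hypothetical label
    loss = F.nll_loss(F.log_softmax(model(x), dim=1), y)
    (loss / batch).backward()                      # accumulate scaled gradients
    if (i + 1) % batch == 0:                       # "backprop every 20 iters"
        optimizer.step()
        optimizer.zero_grad()                      # clear before the next batch
```

Dividing each loss by `batch` before `backward()` gives the same gradient as averaging 20 losses and backpropagating once, without keeping 20 graphs alive in a list.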

Check the gradients and weights: are they updating? This is fairly common in deep learning; the error won't decrease at all. Try a different architecture or kernel size. For image classification you might need more layers. Try batch normalization to see if that helps.
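The check suggested above can be sketched like this (a toy `nn.Linear` model and random data stand in for the real CNN): print each parameter's gradient norm after `backward()`, then confirm the weights actually moved after `optimizer.step()`. All-zero gradient norms, or weights that never change, would explain a flat loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

model = nn.Linear(4, 2)                            # stand-in for the real CNN
optimizer = optim.SGD(model.parameters(), lr=0.1)

before = {n: p.detach().clone() for n, p in model.named_parameters()}

x = torch.randn(8, 4)                              # hypothetical batch
y = torch.randint(0, 2, (8,))
loss = F.nll_loss(F.log_softmax(model(x), dim=1), y)

optimizer.zero_grad()
loss.backward()
for name, p in model.named_parameters():
    print(name, "grad norm:", p.grad.norm().item())  # zero norms = no learning
optimizer.step()

moved = {n: not torch.equal(before[n], p.detach())
         for n, p in model.named_parameters()}
print(moved)  # every entry should be True if the optimizer is wired up
```

In the question's loop, this check would fail immediately: `optimizer` is never created, so no `step()` can update `net`'s weights.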