Problem with CNN

Hello. I have a problem with my small project.
I want to classify RGB images into two classes.
Image size = 100x85.
Training set:
class 1 = 1212 images
class 2 = 1695 images
Test set:
class 1 = 131 images
class 2 = 128 images

Problem: the accuracy is always 49%.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

data = ImageFolder(root='img-train', transform=transforms.ToTensor())
data2 = ImageFolder(root='img-valid', transform=transforms.ToTensor())
trainloader = DataLoader(data)    # note: defaults to batch_size=1 and shuffle=False
testloader = DataLoader(data2)
classes = ('class1', 'class2')

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 320, kernel_size=5)
        self.conv2 = nn.Conv2d(320, 64, kernel_size=5)
        self.conv3 = nn.Conv2d(64, 1024, kernel_size=5)
        self.dropout = nn.Dropout2d()
        # 1024 channels * 7 * 9 spatial positions after three conv+pool
        # stages on a 100x85 input = 64512 flattened features
        self.fc1 = nn.Linear(64512, 500)
        self.fc2 = nn.Linear(500, 250)
        self.fc3 = nn.Linear(250, 2)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.dropout(self.conv2(x)), 2))
        x = F.relu(F.max_pool2d(self.dropout(self.conv3(x)), 2))
        x = x.view(-1, 64512)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        # log-probabilities for NLLLoss
        return F.log_softmax(x, dim=1)
   
net = Net()
net.cuda()
print(net)

criterion = nn.NLLLoss()
optimizer = optim.SGD(net.parameters(), lr=0.005, momentum=0.0022)

for epoch in range(1):  # train for a single epoch for now

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # wrap them in Variable
        inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.data[0]
        if i % 20 == 19:    # print every 20 mini-batches
            print('[%d, %5d] loss: %.20f' %
                  (epoch + 1, i + 1, running_loss / 20))
            running_loss = 0.0

print('Finished Training')

net.eval()

correct = 0
total = 0

for data in testloader:
    images, labels = data
    images = images.cuda()
    labels = labels.cuda()
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()

print('Correct: %d %%' % (100 * correct / total))
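
Since always predicting a single class would already give 128 / (131 + 128) ≈ 49.4% on this test set, a flat 49% usually means the network predicts only one class. A minimal sketch to check that, reusing the net and testloader above:

from collections import Counter

counts = Counter()
net.eval()
for images, labels in testloader:
    outputs = net(Variable(images.cuda()))
    _, predicted = torch.max(outputs.data, 1)
    counts.update(predicted.cpu().tolist())
# e.g. Counter({1: 259}) would confirm the model collapsed to one class
print(counts)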


How is your training accuracy? Is it also at approx. 50%?
It's a bit strange to use 320 filters in the first conv layer, then go back down to 64 and up again to 1024.
Did you mean to use 640 instead?
If so, that still seems a bit high, but it would probably work better than the current model.
Could you plot your training loss and accuracy?
That would make debugging a bit easier; something like the sketch below should work.
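
A minimal sketch, reusing the trainloader, net, criterion, and optimizer from your code; matplotlib is the only extra assumption:

import matplotlib.pyplot as plt

epoch_losses, epoch_accs = [], []
for epoch in range(10):
    running_loss, correct, total = 0.0, 0, 0
    net.train()
    for inputs, labels in trainloader:
        inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.data[0]
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels.data).sum()
    epoch_losses.append(running_loss / len(trainloader))  # mean loss per batch
    epoch_accs.append(100.0 * correct / total)            # train accuracy in %

plt.plot(epoch_losses, label='train loss')
plt.plot(epoch_accs, label='train accuracy (%)')
plt.xlabel('epoch')
plt.legend()
plt.show()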

Thank you for the reply.

This is my code now:

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 320, kernel_size=5)
        self.conv2 = nn.Conv2d(320, 640, kernel_size=5)
        self.conv3 = nn.Conv2d(640, 1024, kernel_size=5)
        self.dropout = nn.Dropout2d()
        # 1024 channels * 7 * 9 spatial positions = 64512 flattened features
        self.fc1 = nn.Linear(64512, 500)
        self.fc2 = nn.Linear(500, 250)
        self.fc3 = nn.Linear(250, 2)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.dropout(self.conv2(x)), 2))
        x = F.relu(F.max_pool2d(self.dropout(self.conv3(x)), 2))
        x = x.view(-1, 64512)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        # log-probabilities for NLLLoss
        return F.log_softmax(x, dim=1)

and my output:

Net(
  (conv1): Conv2d(3, 320, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(320, 640, kernel_size=(5, 5), stride=(1, 1))
  (conv3): Conv2d(640, 1024, kernel_size=(5, 5), stride=(1, 1))
  (dropout): Dropout2d(p=0.5)
  (fc1): Linear(in_features=64512, out_features=500, bias=True)
  (fc2): Linear(in_features=500, out_features=250, bias=True)
  (fc3): Linear(in_features=250, out_features=2, bias=True)
)
[1,   100] loss: 0.69665333777666094139
[1,   200] loss: 0.00708028078079223598
[1,   300] loss: 0.00437617301940917969
[1,   400] loss: 0.00268048048019409180
[1,   500] loss: 0.00068682432174682617
[1,   600] loss: 0.00131387710571289067
[1,   700] loss: 0.00132138729095458989
[1,   800] loss: 0.00043724775314331056
[1,   900] loss: 0.00070947408676147461
[1,  1000] loss: 0.00035747289657592771
[1,  1100] loss: 0.00042194128036499023
[1,  1200] loss: 0.00050359964370727539
[1,  1300] loss: 1.56638206690549841582
[1,  1400] loss: 0.00618010759353637695
[1,  1500] loss: 0.00646384358406066912
[1,  1600] loss: 0.00293265581130981463
[1,  1700] loss: 0.00214604139328002921
[1,  1800] loss: 0.00116072893142700191
[1,  1900] loss: 0.00060119628906249998
[1,  2000] loss: 0.00109928846359252930
[1,  2100] loss: 0.00142927765846252437
[1,  2200] loss: 0.00041127204895019531
[1,  2300] loss: 0.00112100839614868173
[1,  2400] loss: 0.00148788094520568848
[1,  2500] loss: 0.00093175768852233882
[1,  2600] loss: 0.00042861700057983398
[1,  2700] loss: 0.00033172369003295896
[1,  2800] loss: 0.00043708086013793945
[1,  2900] loss: 0.00052500963211059575
Finished Training
Correct: 49 %

How is your training accuracy after the first epoch?

I changed the code, but I still get only 49%.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from torchvision.datasets import ImageFolder
from collections import namedtuple
from torch.utils.data import DataLoader
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms


Params = namedtuple('Params', ['batch_size', 'test_batch_size', 'epochs', 'lr', 'momentum', 'seed', 'cuda', 'log_interval'])
args = Params(batch_size=64, test_batch_size=1000, epochs=10, lr=0.01, momentum=0.5, seed=1, cuda=False, log_interval=200)



data = ImageFolder(root='img-train', transform=transforms.ToTensor())
data2 = ImageFolder(root='img-valid', transform=transforms.ToTensor())
train_loader = DataLoader(data)    # note: args.batch_size is not passed here, so batch_size defaults to 1
test_loader = DataLoader(data2)
classes = ('before', 'after')


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        # 20 channels * 18 * 22 spatial positions after two conv+pool stages = 7920
        self.fc1 = nn.Linear(7920, 50)
        self.fc2 = nn.Linear(50, 2)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 7920)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
      
model = Net()
model.share_memory()  # only needed for multi-process training (e.g. Hogwild); harmless here
def train_epoch(epoch, args, model, data_loader, optimizer):
    model.train()
    for batch_idx, (data, target) in enumerate(data_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()      
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(data_loader.dataset),
                100. * batch_idx / len(data_loader), loss.data[0]))


def test_epoch(model, data_loader):
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in data_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()      
        data, target = Variable(data, volatile=True), Variable(target)
        output = model(data)
        test_loss += F.nll_loss(output, target, size_average=False).data[0] # sum up batch loss
        pred = output.data.max(1)[1] # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()

    test_loss /= len(data_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(data_loader.dataset),
        100. * correct / len(data_loader.dataset)))


# Run the training loop over the epochs (evaluate after each)
if args.cuda:
    model = model.cuda()
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
for epoch in range(1, args.epochs + 1):
    train_epoch(epoch, args, model, train_loader, optimizer)
    test_epoch(model, test_loader)    

and output:

Train Epoch: 1 [0/2510 (0%)]    Loss: 0.730912
Train Epoch: 1 [200/2510 (8%)]  Loss: 0.000000
Train Epoch: 1 [400/2510 (16%)] Loss: 0.393666
Train Epoch: 1 [600/2510 (24%)] Loss: 0.000000
Train Epoch: 1 [800/2510 (32%)] Loss: 0.000000
Train Epoch: 1 [1000/2510 (40%)]        Loss: 0.000000
Train Epoch: 1 [1200/2510 (48%)]        Loss: 0.000000
Train Epoch: 1 [1400/2510 (56%)]        Loss: 0.000161
Train Epoch: 1 [1600/2510 (64%)]        Loss: 0.000000
Train Epoch: 1 [1800/2510 (72%)]        Loss: 0.000000
Train Epoch: 1 [2000/2510 (80%)]        Loss: 0.000028
Train Epoch: 1 [2200/2510 (88%)]        Loss: 0.002456
Train Epoch: 1 [2400/2510 (96%)]        Loss: 0.000000

Test set: Average loss: 13.5654, Accuracy: 128/259 (49%)

Could you additionally run test_epoch with train_loader to see the resubstitution error, i.e. the accuracy on the data the model was trained on?
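
For example, evaluating on the training set inside your existing loop:

for epoch in range(1, args.epochs + 1):
    train_epoch(epoch, args, model, train_loader, optimizer)
    test_epoch(model, train_loader)  # resubstitution: evaluate on the training data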

Train Epoch: 1 [0/2510 (0%)]    Loss: 0.763600
Train Epoch: 1 [200/2510 (8%)]  Loss: 0.000000
Train Epoch: 1 [400/2510 (16%)] Loss: 0.000000
Train Epoch: 1 [600/2510 (24%)] Loss: 0.000000
Train Epoch: 1 [800/2510 (32%)] Loss: 0.000000
Train Epoch: 1 [1000/2510 (40%)]        Loss: 0.000000
Train Epoch: 1 [1200/2510 (48%)]        Loss: 0.000000
Train Epoch: 1 [1400/2510 (56%)]        Loss: 0.000004
Train Epoch: 1 [1600/2510 (64%)]        Loss: 0.000000
Train Epoch: 1 [1800/2510 (72%)]        Loss: 0.000000
Train Epoch: 1 [2000/2510 (80%)]        Loss: 0.000000
Train Epoch: 1 [2200/2510 (88%)]        Loss: 0.000000
Train Epoch: 1 [2400/2510 (96%)]        Loss: 0.000000

Test set: Average loss: 22.8103, Accuracy: 1299/2510 (52%)

Train Epoch: 2 [0/2510 (0%)]    Loss: 25.119247
Train Epoch: 2 [200/2510 (8%)]  Loss: 0.000971
Train Epoch: 2 [400/2510 (16%)] Loss: 0.000031
Train Epoch: 2 [600/2510 (24%)] Loss: 0.000000
Train Epoch: 2 [800/2510 (32%)] Loss: 0.003936
Train Epoch: 2 [1000/2510 (40%)]        Loss: 0.000000
Train Epoch: 2 [1200/2510 (48%)]        Loss: 0.070989
Train Epoch: 2 [1400/2510 (56%)]        Loss: 0.073976
Train Epoch: 2 [1600/2510 (64%)]        Loss: 0.000000
Train Epoch: 2 [1800/2510 (72%)]        Loss: 0.005794
Train Epoch: 2 [2000/2510 (80%)]        Loss: 0.000753
Train Epoch: 2 [2200/2510 (88%)]        Loss: 0.000000
Train Epoch: 2 [2400/2510 (96%)]        Loss: 0.000000

Test set: Average loss: 6.6127, Accuracy: 1299/2510 (52%)

Train Epoch: 3 [0/2510 (0%)]    Loss: 17.105192
Train Epoch: 3 [200/2510 (8%)]  Loss: 0.145598
Train Epoch: 3 [400/2510 (16%)] Loss: 0.018258
Train Epoch: 3 [600/2510 (24%)] Loss: 0.062921
Train Epoch: 3 [800/2510 (32%)] Loss: 0.054164
Train Epoch: 3 [1000/2510 (40%)]        Loss: 0.037550
Train Epoch: 3 [1200/2510 (48%)]        Loss: 0.008747
Train Epoch: 3 [1400/2510 (56%)]        Loss: 0.051795
Train Epoch: 3 [1600/2510 (64%)]        Loss: 0.003390
Train Epoch: 3 [1800/2510 (72%)]        Loss: 0.016323
Train Epoch: 3 [2000/2510 (80%)]        Loss: 0.022865
Train Epoch: 3 [2200/2510 (88%)]        Loss: 0.004945
Train Epoch: 3 [2400/2510 (96%)]        Loss: 0.001030

Test set: Average loss: 2.9158, Accuracy: 1299/2510 (52%)

Train Epoch: 4 [0/2510 (0%)]    Loss: 7.015765
Train Epoch: 4 [200/2510 (8%)]  Loss: 0.134741
Train Epoch: 4 [400/2510 (16%)] Loss: 0.049476
Train Epoch: 4 [600/2510 (24%)] Loss: 0.013536
Train Epoch: 4 [800/2510 (32%)] Loss: 0.008116
Train Epoch: 4 [1000/2510 (40%)]        Loss: 0.041024
Train Epoch: 4 [1200/2510 (48%)]        Loss: 0.001711
Train Epoch: 4 [1400/2510 (56%)]        Loss: 0.087386
Train Epoch: 4 [1600/2510 (64%)]        Loss: 0.019133
Train Epoch: 4 [1800/2510 (72%)]        Loss: 0.002450
Train Epoch: 4 [2000/2510 (80%)]        Loss: 0.134814
Train Epoch: 4 [2200/2510 (88%)]        Loss: 0.001560
Train Epoch: 4 [2400/2510 (96%)]        Loss: 0.009285

Test set: Average loss: 3.0047, Accuracy: 1299/2510 (52%)

Train Epoch: 5 [0/2510 (0%)]    Loss: 2.197697
Train Epoch: 5 [200/2510 (8%)]  Loss: 0.142283
Train Epoch: 5 [400/2510 (16%)] Loss: 0.062617
Train Epoch: 5 [600/2510 (24%)] Loss: 0.022220
Train Epoch: 5 [800/2510 (32%)] Loss: 0.016140
Train Epoch: 5 [1000/2510 (40%)]        Loss: 0.009101
Train Epoch: 5 [1200/2510 (48%)]        Loss: 0.011840
Train Epoch: 5 [1400/2510 (56%)]        Loss: 0.233425
Train Epoch: 5 [1600/2510 (64%)]        Loss: 0.041333
Train Epoch: 5 [1800/2510 (72%)]        Loss: 0.080752
Train Epoch: 5 [2000/2510 (80%)]        Loss: 0.010446
Train Epoch: 5 [2200/2510 (88%)]        Loss: 0.001828
Train Epoch: 5 [2400/2510 (96%)]        Loss: 0.018279

Test set: Average loss: 2.6521, Accuracy: 1299/2510 (52%)

Train Epoch: 6 [0/2510 (0%)]    Loss: 5.629133
Train Epoch: 6 [200/2510 (8%)]  Loss: 0.111943
Train Epoch: 6 [400/2510 (16%)] Loss: 0.078148
Train Epoch: 6 [600/2510 (24%)] Loss: 0.030254
Train Epoch: 6 [800/2510 (32%)] Loss: 0.018951
Train Epoch: 6 [1000/2510 (40%)]        Loss: 0.007060
Train Epoch: 6 [1200/2510 (48%)]        Loss: 0.006947
Train Epoch: 6 [1400/2510 (56%)]        Loss: 0.146729
Train Epoch: 6 [1600/2510 (64%)]        Loss: 0.037547
Train Epoch: 6 [1800/2510 (72%)]        Loss: 0.008593
Train Epoch: 6 [2000/2510 (80%)]        Loss: 0.061359
Train Epoch: 6 [2200/2510 (88%)]        Loss: 0.010392
Train Epoch: 6 [2400/2510 (96%)]        Loss: 0.003662

Test set: Average loss: 2.4896, Accuracy: 1299/2510 (52%)

Train Epoch: 7 [0/2510 (0%)]    Loss: 5.354959
Train Epoch: 7 [200/2510 (8%)]  Loss: 0.219744
Train Epoch: 7 [400/2510 (16%)] Loss: 0.047250
Train Epoch: 7 [600/2510 (24%)] Loss: 0.033134
Train Epoch: 7 [800/2510 (32%)] Loss: 0.006888
Train Epoch: 7 [1000/2510 (40%)]        Loss: 0.041285
Train Epoch: 7 [1200/2510 (48%)]        Loss: 0.010135
Train Epoch: 7 [1400/2510 (56%)]        Loss: 0.185328
Train Epoch: 7 [1600/2510 (64%)]        Loss: 0.030203
Train Epoch: 7 [1800/2510 (72%)]        Loss: 0.068935
Train Epoch: 7 [2000/2510 (80%)]        Loss: 0.021755
Train Epoch: 7 [2200/2510 (88%)]        Loss: 0.013778
Train Epoch: 7 [2400/2510 (96%)]        Loss: 0.001412

Test set: Average loss: 2.3966, Accuracy: 1299/2510 (52%)

Train Epoch: 8 [0/2510 (0%)]    Loss: 5.587163
Train Epoch: 8 [200/2510 (8%)]  Loss: 0.201555
Train Epoch: 8 [400/2510 (16%)] Loss: 0.057020
Train Epoch: 8 [600/2510 (24%)] Loss: 0.030785
Train Epoch: 8 [800/2510 (32%)] Loss: 0.045294
Train Epoch: 8 [1000/2510 (40%)]        Loss: 0.009532
Train Epoch: 8 [1200/2510 (48%)]        Loss: 0.033173
Train Epoch: 8 [1400/2510 (56%)]        Loss: 0.242416
Train Epoch: 8 [1600/2510 (64%)]        Loss: 0.054072
Train Epoch: 8 [1800/2510 (72%)]        Loss: 0.014919
Train Epoch: 8 [2000/2510 (80%)]        Loss: 0.028255
Train Epoch: 8 [2200/2510 (88%)]        Loss: 0.014499
Train Epoch: 8 [2400/2510 (96%)]        Loss: 0.004550

Test set: Average loss: 2.2503, Accuracy: 1299/2510 (52%)

Train Epoch: 9 [0/2510 (0%)]    Loss: 4.205160
Train Epoch: 9 [200/2510 (8%)]  Loss: 0.219553
Train Epoch: 9 [400/2510 (16%)] Loss: 0.076956
Train Epoch: 9 [600/2510 (24%)] Loss: 0.041252
Train Epoch: 9 [800/2510 (32%)] Loss: 0.011788
Train Epoch: 9 [1000/2510 (40%)]        Loss: 0.019258
Train Epoch: 9 [1200/2510 (48%)]        Loss: 0.025886
Train Epoch: 9 [1400/2510 (56%)]        Loss: 0.264767
Train Epoch: 9 [1600/2510 (64%)]        Loss: 0.064278
Train Epoch: 9 [1800/2510 (72%)]        Loss: 0.035067
Train Epoch: 9 [2000/2510 (80%)]        Loss: 0.014171
Train Epoch: 9 [2200/2510 (88%)]        Loss: 0.017240
Train Epoch: 9 [2400/2510 (96%)]        Loss: 0.010370

Test set: Average loss: 2.1646, Accuracy: 1299/2510 (52%)

Train Epoch: 10 [0/2510 (0%)]   Loss: 3.983953
Train Epoch: 10 [200/2510 (8%)] Loss: 0.247857
Train Epoch: 10 [400/2510 (16%)]        Loss: 0.079800
Train Epoch: 10 [600/2510 (24%)]        Loss: 0.032828
Train Epoch: 10 [800/2510 (32%)]        Loss: 0.014672
Train Epoch: 10 [1000/2510 (40%)]       Loss: 0.015492
Train Epoch: 10 [1200/2510 (48%)]       Loss: 0.006651
Train Epoch: 10 [1400/2510 (56%)]       Loss: 0.244603
Train Epoch: 10 [1600/2510 (64%)]       Loss: 0.065021
Train Epoch: 10 [1800/2510 (72%)]       Loss: 0.027480
Train Epoch: 10 [2000/2510 (80%)]       Loss: 0.019957
Train Epoch: 10 [2200/2510 (88%)]       Loss: 0.013186
Train Epoch: 10 [2400/2510 (96%)]       Loss: 0.006839

Test set: Average loss: 2.1206, Accuracy: 1299/2510 (52%)

If this is the training accuracy, your model doesn't seem to learn anything useful.
In that case, scale the problem down to a single image and try to overfit your model on it.
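
A rough sketch of such an overfitting check, reusing the model, data, and imports above; if everything is wired correctly, the loss should drop to nearly zero within a few dozen steps:

image, target = data[0]                        # a single training sample
image = Variable(image.unsqueeze(0))           # add a batch dimension
target = Variable(torch.LongTensor([target]))

model.train()
optimizer = optim.SGD(model.parameters(), lr=0.01)
for step in range(200):
    optimizer.zero_grad()
    output = model(image)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(step, loss.data[0])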

I'm trying to overfit, but my network doesn't learn. Maybe the challenge is too difficult: I want to classify breast photos as before or after breast augmentation.
Here is my dataset: