What do I need to change in the CNN model for a regression problem?

I have a CNN model that classifies MNIST digits. The model has 10 output nodes and uses CrossEntropyLoss as the loss function with the Adam optimizer. The structure of the model is given below:

import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(         
            nn.Conv2d(
                in_channels=1,              
                out_channels=16,            
                kernel_size=5,              
                stride=1,                   
                padding=2,                  
            ),                              
            nn.ReLU(),                      
            nn.MaxPool2d(kernel_size=2),    
        )
        self.conv2 = nn.Sequential(         
            nn.Conv2d(16, 32, 5, 1, 2),     
            nn.ReLU(),                      
            nn.MaxPool2d(2),                
        )
        # fully connected layer, output 10 classes
        self.out = nn.Linear(32 * 7 * 7, 10)
        # self.softmax = torch.nn.Softmax(dim=1)
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        # flatten the output of conv2 to (batch_size, 32 * 7 * 7)
        x = x.view(x.size(0), -1)       
        output = self.out(x)
        # output = self.softmax(output)
        return output, x    # return x for visualization
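For reference, a quick shape sanity check on a dummy batch (hypothetical input tensor, just to confirm the 32 * 7 * 7 flattening):

model = CNN()
dummy = torch.randn(2, 1, 28, 28)   # a fake batch of two MNIST-sized images
logits, features = model(dummy)
print(logits.shape)                 # torch.Size([2, 10])
print(features.shape)               # torch.Size([2, 1568]), i.e. 32 * 7 * 7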

Just as a CNN can be used to predict house prices from images, I now want to treat this classification problem as a regression problem: I want to predict the digit as a single number (whatever the model outputs, e.g. 1.0, 0.5, 8.5, etc.). To do this I need only 1 output neuron instead of 10.

If I change this line to the expected output size (1 regression output), I get an error (since the target labels run from 0 to 9):

# fully connected layer, 1 output
self.out = nn.Linear(32 * 7 * 7, 1)

Error

IndexError: Target 2 is out of bounds.
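As far as I can tell, the error occurs because CrossEntropyLoss still interprets the single output column as one class, so any label greater than 0 is out of range. A minimal standalone reproduction:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(100, 1)            # 1 output neuron, so only "class 0" exists
labels = torch.randint(0, 10, (100,))   # MNIST-style integer labels 0..9
loss = criterion(logits, labels)        # IndexError: Target ... is out of bounds.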

Could you tell me what I have to do if I want to predict the digit as a single number? Moreover, do I need to change the loss function to MSELoss and the optimizer to SGD?
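For concreteness, this is the setup I am imagining (just a sketch of my current understanding, not something I know to be correct):

criterion = nn.MSELoss()                                     # regression loss instead of CrossEntropyLoss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # keeping Adam; unsure whether SGD is required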

The data loader:

import torch
from torchvision import datasets, transforms

def data_loaders():
    train_data = datasets.MNIST(
        root = 'data',
        train = True,                         
        transform = transforms.ToTensor(), 
        download = True,            
    )
    test_data = datasets.MNIST(
        root = 'data', 
        train = False, 
        transform = transforms.ToTensor()
    )

    train_test_loaders = {
        'train' : torch.utils.data.DataLoader(train_data, 
                                            batch_size=100, 
                                            shuffle=True, 
                                            num_workers=1),
        
        'test'  : torch.utils.data.DataLoader(test_data, 
                                            batch_size=100, 
                                            shuffle=True, 
                                            num_workers=1),
    }
    return train_test_loaders
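One thing I noticed while debugging (a quick check on one batch): the labels come out of the loader as torch.int64, so for MSELoss they would presumably need converting to float:

loaders = data_loaders()
images, labels = next(iter(loaders['train']))
print(images.dtype, labels.dtype)   # torch.float32 torch.int64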

Later, at training time:

def train(NB_EPOCS, model, loaders):
    model.train()

    # Train the model
    total_step = len(loaders['train'])
    for epoch in range(NB_EPOCS):
        for i, (images, labels) in enumerate(loaders['train']):
            b_x = images   # batch x
            b_y = labels   # batch y
            print(b_x.shape, b_y.shape)
            output = model(b_x)[0]
            loss = criterion(output, b_y)   # this is the line that raises the IndexError

The shapes of b_x and b_y are:

torch.Size([100, 1, 28, 28]) torch.Size([100])
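If I switch to MSELoss, I believe the targets would also need to match the output in dtype and shape, something like this (a sketch, assuming a 1-neuron output; otherwise MSELoss would broadcast a (100, 1) output against a (100,) target):

output = model(b_x)[0]           # shape (100, 1) with 1 output neuron
b_y = b_y.float().unsqueeze(1)   # (100,) int64 -> (100, 1) float32
loss = criterion(output, b_y)    # criterion = nn.MSELoss()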

Hi Akib!

Perhaps some of the posts in the following thread could be helpful:

Best.

K. Frank

@KFrank the link you gave me points to this post itself (this is my own post, and I haven't received an answer to the question here).