Conv2d parameters and accuracy

Hi - I am hoping to request help with my Conv2d parameter setup for the 3 x 32 x 32 CIFAR10 dataset.
My accuracy hovers around 80%; is there anything I can do to improve it? I cannot change the input size of the first fully connected layer to anything other than 4096, as that is a requirement.
Specifically, I am looking for feedback on whether I need to adjust my hyperparameters or the in/out channels.
torch.Size([64, 4096, 1, 1]) is the shape before the fully connected layers.
Batch size is 64 and the number of epochs is 100.
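For reference, here is how the 32 x 32 input shrinks to 1 x 1 through the four convolutions (a quick arithmetic sketch, using out = (in - kernel) // stride + 1 with padding 0):

# Quick check of the spatial size through the four kernel-3, stride-2, padding-0 convs
size = 32
for layer in range(1, 5):
    size = (size - 3) // 2 + 1
    print(f"after conv{layer}: {size} x {size}")   # 15, 7, 3, 1
# Final feature map: 4096 channels at 1 x 1, matching torch.Size([64, 4096, 1, 1])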

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


class SimpleCNN(torch.nn.Module):
    """Simple CNN for CIFAR10; the parameters for the convolution layers are set in __init__."""

    def __init__(self):
        super(SimpleCNN, self).__init__()

        # Each kernel-3, stride-2, padding-0 conv shrinks the feature map: 32 -> 15 -> 7 -> 3 -> 1
        self.conv1 = torch.nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=0)
        self.conv2 = torch.nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=0)
        self.conv3 = torch.nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=0)
        self.conv4 = torch.nn.Conv2d(128, 4096, kernel_size=3, stride=2, padding=0)

        # Classifier: 4096 flattened features -> 1024 -> 64 -> 10 classes
        self.fc1 = torch.nn.Linear(1 * 1 * 4096, 1024)
        self.fc2 = torch.nn.Linear(1024, 64)
        self.fc3 = torch.nn.Linear(64, 10)

    
    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = F.relu(self.conv4(x))
        print(x.shape)                # torch.Size([64, 4096, 1, 1])
        x = x.view(x.size(0), -1)     # flatten to (batch, 4096)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

firstCNN = SimpleCNN()
print (firstCNN)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(firstCNN.parameters(), lr=0.001)

# Train the model
epochs = 100
total_step = len(train_loader)

for epoch in range(epochs):
    sumloss = 0.0
    n = 0
    for i, (images, labels) in enumerate(train_loader):

        # Forward pass
        outputs = firstCNN(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        #scheduler.step()

        n += 1
        sumloss += loss.item()   # .item() so the computation graph is not kept around

    avg_loss = sumloss / n
    print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}, Avg Loss: {:.4f}'
          .format(epoch + 1, epochs, i + 1, total_step, loss.item(), avg_loss))

Hoping to hear any possible ideas, please…

Can you give us more details? What is the training accuracy? If the training accuracy is much higher than 80%, then there is a chance of overfitting. In that case, you could try adding a Dropout layer.
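For example, a minimal sketch of dropout in front of fully connected layers with your 4096 -> 1024 -> 64 -> 10 sizes (not your exact model; p=0.5 is just a common starting point, not a tuned value):

import torch
import torch.nn as nn

# Sketch: dropout inserted between the flattened conv features and the FC layers
classifier = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(4096, 1024),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(1024, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

features = torch.randn(64, 4096)   # stand-in for the flattened conv4 output
logits = classifier(features)
print(logits.shape)                # torch.Size([64, 10])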

Also, do you do any data augmentation? You can take a look at different transformations for data augmentation: https://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transforms
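For instance, a rough sketch of typical CIFAR10 training-time augmentation (assuming you load the data with torchvision.datasets.CIFAR10; the normalization constants are the commonly quoted CIFAR10 channel means/stds, so swap in whatever you already use):

import torch
import torchvision
import torchvision.transforms as transforms

# Random crops and horizontal flips are a common starting point for CIFAR10
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
                                         download=True, transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)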

Thanks for your input. The training accuracy is just over 80%. I have not added a dropout layer; I will do that and let you know the outcome. The data was transformed to begin with.

Thanks -