Why does my network converge slowly?

Hello, I am training a simple CNN with about 70,000 parameters on the Tiny-ImageNet dataset.
My preprocessing is shown below:

    import torch
    from torchvision import datasets, transforms

    # hyperparameters
    train_batch_size = 128
    test_batch_size = 128
    lr = 0.001
    momentum = 0.9
    weight_decay = 0
    seed = 7
    margin = 1.0
    log_interval = 10
    resume = '-r'
    epochs = 10

    # ImageNet channel statistics used for normalization
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

    # training transforms: downscale to 32x32, random horizontal flip, normalize
    transform_train = transforms.Compose([
        transforms.Resize(32),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize])

    # test transforms: same resize and normalization, no augmentation
    transform_test = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor(),
        normalize])

    train_dataset = datasets.ImageFolder('./tiny-imagenet-200/train', transform=transform_train)
    test_dataset = datasets.ImageFolder('./tiny-imagenet-200/val', transform=transform_test)

    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=train_batch_size,
                                               shuffle=True, num_workers=0)
    test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=test_batch_size,
                                              shuffle=False, num_workers=0)
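
As a quick sanity check (this snippet is only illustrative and not part of my training script), one batch from the train loader looks like this:

    # illustrative sanity check of the data pipeline
    print(len(train_dataset), len(test_dataset))   # number of train / val images found by ImageFolder
    images, labels = next(iter(train_loader))
    print(images.shape)    # expected torch.Size([128, 3, 32, 32]) after Resize(32)
    print(labels[:10])     # class indices assigned by ImageFolder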

After 100 epochs the accuracy is 23% and the loss is about 3. My optimizer is SGD with the hyperparameters listed above.
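
My training step looks roughly like the sketch below; `model` and the cross-entropy criterion are placeholders for my actual CNN and loss, which are not shown here.

    import torch.nn as nn
    import torch.optim as optim

    # `model` is a placeholder for my small CNN (definition not shown)
    criterion = nn.CrossEntropyLoss()  # placeholder for my actual criterion
    optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum,
                          weight_decay=weight_decay)

    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            outputs = model(images)            # forward pass
            loss = criterion(outputs, labels)
            loss.backward()                    # backward pass
            optimizer.step()                   # SGD update (lr=0.001, momentum=0.9)
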
I also tried a deeper network like AlexNet, but the result didn't change.
What's wrong with my training? Any help would be appreciated.