Training Loss barely changes

I am currently trying to do transfer learning with a ResNet model. I run the training loop for 5 epochs, but the training and validation loss barely change, if at all, and I am not sure why.
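
For context, the loop relies on objects created earlier in the script (`m`, `criterion`, `optimizer`, `scheduler`, `device`, the dataloaders, and the dataset sizes). The setup looks roughly like the sketch below; the ResNet variant, hyperparameters, and `num_classes` here are placeholders rather than my exact values:

    import copy
    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Pretrained backbone with the final fully connected layer replaced
    m = models.resnet18(pretrained=True)
    m.fc = nn.Linear(m.fc.in_features, num_classes)  # num_classes: placeholder
    m = m.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(m.parameters(), lr=0.001, momentum=0.9)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

    # train_data / valid_data are DataLoaders; *_data_size = len(dataset)
    best_acc = 0.0
    best_model_wts = copy.deepcopy(m.state_dict())

The training loop itself: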

    for epoch in range(1, num_epochs+1):
        print('Epoch {}/{}'.format(epoch, num_epochs))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'valid']:
            if phase == 'train':
                m.train()  # Set model to training mode
                dataloader = train_data
            else:
                m.eval()   # Set model to evaluate mode
                dataloader = valid_data

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloader:
                inputs, labels = inputs.to(device), labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = m(inputs)
                    loss = criterion(outputs, labels)
                    _, preds = torch.max(outputs, 1)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                        scheduler.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
                
            if phase == 'train':
                epoch_loss = running_loss / train_data_size
                epoch_acc = running_corrects.double() / train_data_size
            else:
                epoch_loss = running_loss / valid_data_size
                epoch_acc = running_corrects.double() / valid_data_size

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'valid' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(m.state_dict())

This code produces the following output:

    Epoch 1/5
    ----------
    train Loss: 1.2359 Acc: 0.6404
    valid Loss: 1.2904 Acc: 0.4593

    Epoch 2/5
    ----------
    train Loss: 1.2190 Acc: 0.6592
    valid Loss: 1.2884 Acc: 0.4593

    Epoch 3/5
    ----------
    train Loss: 1.2176 Acc: 0.6565
    valid Loss: 1.2878 Acc: 0.4593

    Epoch 4/5
    ----------
    train Loss: 1.2233 Acc: 0.6537
    valid Loss: 1.2882 Acc: 0.4593

    Epoch 5/5
    ----------
    train Loss: 1.2163 Acc: 0.6594
    valid Loss: 1.2883 Acc: 0.4593

Any help or suggestions are appreciated.