Model doesn't train

Hi, I’m trying to train a binary classifier that takes 64x64x3 images as input, but the model only outputs values close to 0. I also checked the parameters before and after training and found that they differ, so the gradient updates are being applied.
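
For reference, this is roughly how I checked that the parameters change (not my exact code, just the idea):

    import torch

    # snapshot of all parameters before training (detached copies)
    before = {name: p.detach().clone() for name, p in self.model.named_parameters()}

    # ... run the training loop below ...

    # report which parameters actually changed
    for name, p in self.model.named_parameters():
        print(name, 'changed:', not torch.allclose(before[name], p.detach()))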

Is there anything else I should try?

    from torch.utils.data import DataLoader  # device is assumed to be defined elsewhere, e.g. torch.device('cuda')

    def fn_fit(self, optimizer, loss_fn, batch_size=16, epochs=10, lr=0.001):
        # note: lr is unused here, the learning rate comes from the optimizer that is passed in
        train_load = DataLoader(self.train, batch_size=batch_size, shuffle=True)

        total_step = len(train_load)
        log_every = max(1, total_step // 10)  # avoid modulo-by-zero when there are fewer than 10 batches
        for epoch in range(epochs):
            for step, (x_batch, y_batch) in enumerate(train_load):
                # Initializing
                self.model.train()
                optimizer.zero_grad()

                # Loading GPU memory
                x_batch, y_batch = x_batch.to(device), y_batch.to(device)

                # Feed forward
                pred = self.model(x_batch)
                loss = loss_fn(pred, y_batch)

                # Feed backward
                loss.backward()

                # Update parameters
                optimizer.step()

                # log roughly ten times per epoch, plus the last step of the epoch
                if step % log_every == 0 or step == total_step - 1:
                    self.train_loss.append(loss.item())
                    print(f'Epoch:{epoch+1:2d}/{epochs}, Step:{step:3d}/{total_step}, Training Loss:{loss.item():.4f}')
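
For context, the optimizer and loss function are created outside and passed in, roughly like this (the specific choices below are only an example, not necessarily what matters here):

    import torch
    import torch.nn as nn

    # inside the same class:
    optimizer = torch.optim.Adam(self.model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()  # expects raw logits and float targets of matching shape
    self.fn_fit(optimizer, loss_fn, batch_size=16, epochs=10)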

Try to overfit a small dataset, e.g. just 10 samples, by playing around with the hyperparameters.
Once your model is able to do so, scale up to the full use case again.
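
For example, something along these lines, reusing the names from your code (optimizer, loss_fn, and device as in your setup; the subset size and epoch count are arbitrary):

    from torch.utils.data import Subset, DataLoader

    # fixed tiny subset of the training data
    tiny_train = Subset(self.train, list(range(10)))
    tiny_loader = DataLoader(tiny_train, batch_size=2, shuffle=True)

    # train long enough that the loss should drop to ~0;
    # if it does not, the bug is in the model, the loss, or the data pipeline
    self.model.train()
    for epoch in range(200):
        for x_batch, y_batch in tiny_loader:
            x_batch, y_batch = x_batch.to(device), y_batch.to(device)
            optimizer.zero_grad()
            loss = loss_fn(self.model(x_batch), y_batch)
            loss.backward()
            optimizer.step()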

I resized the input images, and it turns out the model is finally learning. Thank you!
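
In case it helps someone else, the resizing can be done with a torchvision transform when the dataset loads each image, e.g. (the target size below is only an example, not necessarily the one I used):

    from torchvision import transforms

    # applied to every image as the dataset loads it
    transform = transforms.Compose([
        transforms.Resize((64, 64)),  # example target size
        transforms.ToTensor(),
    ])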