Loss function stuck and all output values are the same

Neural net

import torch
import torch.nn as nn

class NeuralNet(nn.Module):

  def __init__(self, girdi_boyutu, cikti_boyutu):
    super(NeuralNet, self).__init__()
    self.boyut = girdi_boyutu
    # ten stacked hidden layers, all of width girdi_boyutu
    self.katman1 = nn.Linear(girdi_boyutu, self.boyut)
    self.katman2 = nn.Linear(self.boyut, self.boyut)
    self.katman3 = nn.Linear(self.boyut, self.boyut)
    self.katman4 = nn.Linear(self.boyut, self.boyut)
    self.katman5 = nn.Linear(self.boyut, self.boyut)
    self.katman6 = nn.Linear(self.boyut, self.boyut)
    self.katman7 = nn.Linear(self.boyut, self.boyut)
    self.katman8 = nn.Linear(self.boyut, self.boyut)
    self.katman9 = nn.Linear(self.boyut, self.boyut)
    self.katman10 = nn.Linear(self.boyut, self.boyut)
    self.ciktikatman = nn.Linear(self.boyut, cikti_boyutu)

    self.prelu = nn.RReLU()      # note: despite the name, this is an RReLU, not a PReLU
    self.tanh = nn.Tanh()        # unused in forward
    self.sigmoid = nn.Sigmoid()  # unused in forward

  def forward(self, x):
    out = self.prelu(self.katman1(x))
    out = self.prelu(self.katman2(out))
    out = self.prelu(self.katman3(out))
    out = self.prelu(self.katman4(out))
    out = self.prelu(self.katman5(out))
    out = self.prelu(self.katman6(out))
    out = self.prelu(self.katman7(out))  # fixed: the original assigned to `oout` here, silently skipping this activation
    out = self.prelu(self.katman8(out))
    out = self.prelu(self.katman9(out))
    out = self.prelu(self.katman10(out))
    out = self.ciktikatman(out)
    return out

Training code

import math
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader
from tqdm import tqdm

# `model` is an instance of NeuralNet above; `hdata` is the poster's dataset (defined elsewhere)
n_epoc = 400
bolunecek = 1
losyaz = 1
gciz = 5

batch_size = math.ceil(hdata.x.shape[0] / bolunecek)
dataload = DataLoader(hdata, batch_size, shuffle=False)  # fixed: shuffle="false" is a non-empty string and therefore truthy

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
ciran = nn.MSELoss()

losgrap = []
ia = []
isay = 0

for epoch in tqdm(range(n_epoc)):
  for i, (x, y) in enumerate(dataload):
    optimizer.zero_grad()
    output = model(x)
    loss = ciran(output, y)
    loss.backward()
    optimizer.step()

  if isay % losyaz == 0:
    print(loss.item())
  losgrap.append(loss.item())  # .item() detaches; appending the raw tensor keeps every computation graph alive
  isay += 1
  ia.append(isay)
  if isay % gciz == 0:
    plt.plot(ia, losgrap)
    plt.pause(0.001)

No matter what training data I use, I always get the same output, and the loss value gets stuck. Training data that used to work is no longer working, and I am facing the same issue. I have tried everything. What is causing the problem?

Try to overfit a small dataset (e.g. just 10 samples) first by playing around with the model architecture as well as the hyperparameters. Your current model contains a lot of stacked linear layers, and I don’t know if you’ve already verified that the model is not suffering from, e.g., vanishing gradients.
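For reference, here is a minimal sketch of both checks, reusing the NeuralNet class above. The input/output sizes and the 10-sample random dataset are illustrative assumptions, not values from your setup:

import torch
import torch.nn as nn

girdi_boyutu, cikti_boyutu = 8, 1             # illustrative sizes
model = NeuralNet(girdi_boyutu, cikti_boyutu)
x = torch.randn(10, girdi_boyutu)             # tiny dataset: 10 random samples
y = torch.randn(10, cikti_boyutu)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# a healthy model should drive the loss to ~0 on 10 fixed samples
for step in range(2000):
  optimizer.zero_grad()
  loss = criterion(model(x), y)
  loss.backward()
  optimizer.step()
print(loss.item())

# if it cannot, inspect per-layer gradient norms: values shrinking toward
# zero in the earliest layers point to vanishing gradients
for name, p in model.named_parameters():
  if p.grad is not None:
    print(name, p.grad.norm().item())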

Thank you for responding. I tried changing the hyperparameters. I noticed that even when I change the learning rate by a large amount (e.g. from 1e-5 to 1e-20), there is no significant change in the loss value. The loss continues to oscillate as usual (between 5.0 and 6.0), and the outputs produced also remain constant.
I shortened and simplified my network model and changed my activation function (LeakyReLU, sigmoid, etc.), but the result did not change. The network and this dataset, which used to work without any problems, no longer work even though the code has not changed. The setup that was running smoothly about two months ago has stopped working.
I’m new to PyTorch. Can you tell me what steps to follow? I can’t find a solution. Thank you!
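One quick way to narrow this down (a minimal diagnostic sketch, reusing `model`, `dataload`, and `ciran` from the training code above): snapshot the weights, take a single optimizer step with a deliberately large learning rate, and check whether anything moved. If the weights stay put regardless of the learning rate, the gradients reaching them are effectively zero:

import torch

before = {n: p.detach().clone() for n, p in model.named_parameters()}

x, y = next(iter(dataload))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)  # deliberately large
optimizer.zero_grad()
loss = ciran(model(x), y)
loss.backward()
optimizer.step()

for n, p in model.named_parameters():
  delta = (p.detach() - before[n]).abs().max().item()
  print(n, "max weight change:", delta, "grad norm:", p.grad.norm().item())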