Constant loss = -log(1/10)

The loss never decreases and I'm not sure why. It is always equal to the cross-entropy loss of random predictions over the 10 classes of CIFAR-10: at every epoch it stays at -log(1/10) = 2.30.
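(For reference, that value is exactly the cross-entropy of a uniform prediction over 10 classes:)

```python
import math
# cross-entropy of a uniform prediction over 10 classes
print(-math.log(1 / 10))  # 2.302585..., the value the loss is stuck at
```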

I suspect that backprop is not working correctly, but I'm not sure how to debug it.

```python
def initialize(wfp):
  # Initialize the ternary distribution parameters from the full-precision weights wfp,
  # where sigm(a) is the probability of w = 0 and sigm(b) the probability of w = 1 given w != 0.
  wtilde=wfp/torch.std(wfp)
  sigma_a=0.95-((0.95-0.05)*torch.abs(wtilde))   # large weights -> low probability of being zero
  sigma_b=0.5*(1+(wfp/(1-sigma_a)))              # positive weights -> higher probability of +1
  sigma_a=torch.clamp(sigma_a,0.05,0.95)
  sigma_b=torch.clamp(sigma_b,0.05,0.95)
  a=torch.log(sigma_a/(1-sigma_a)).requires_grad_().cuda()   # logits of the clamped probabilities
  b=torch.log(sigma_b/(1-sigma_b)).requires_grad_().cuda()

  return a,b

w1fpconv=convlayer1param()
w2fpconv=convlayer2param()
w3fpconv=convlayer3param()
w4fpconv=convlayer4param()
w5fpconv=convlayer5param()
w6fpconv=convlayer6param()
wfp1=model['layer4.1.weight']
wfp2=model['layer4.4.weight']
al1,bl1=initialize(w1fpconv)
al2,bl2=initialize(w2fpconv)
al3,bl3=initialize(w3fpconv)
al4,bl4=initialize(w4fpconv)
al5,bl5=initialize(w5fpconv)
al6,bl6=initialize(w6fpconv)
a1,b1=initialize(wfp1)
a2,b2=initialize(wfp2)

al1=torch.nn.Parameter(al1)
bl1=torch.nn.Parameter(bl1)
al2=torch.nn.Parameter(al2)
bl2=torch.nn.Parameter(bl2)
al3=torch.nn.Parameter(al3)
bl3=torch.nn.Parameter(bl3)
al4=torch.nn.Parameter(al4)
bl4=torch.nn.Parameter(bl4)
al5=torch.nn.Parameter(al5)
bl5=torch.nn.Parameter(bl5)
al6=torch.nn.Parameter(al6)
bl6=torch.nn.Parameter(bl6)
a1=torch.nn.Parameter(a1)
b1=torch.nn.Parameter(b1)
a2=torch.nn.Parameter(a2)
b2=torch.nn.Parameter(b2)


betaparam=1e-11
lossfunc=torch.nn.CrossEntropyLoss().to(device)

lr=0.01
optimizer=torch.optim.Adam([al1,bl1,al2,bl2,al3,bl3,al4,bl4,al5,bl5,al6,bl6,a1,b1,a2,b2],lr,weight_decay=5e-4)

num_epochs=10

for epoch in range(num_epochs):
  for i,(images,labels) in enumerate(train_loader):
    images=images.to(device)
    labels=labels.to(device)
    y1=reparamcnn1(al1,bl1,images)
    y2=reparamcnn2(al2,bl2,y1)
    y3=reparamcnn3(al3,bl3,y2)
    y4=reparamcnn4(al4,bl4,y3)
    y5=reparamcnn5(al5,bl5,y4)
    y6=reparamcnn6(al6,bl6,y5)
    y6=y6.reshape(y6.size(0),-1)
    y6=torch.t(y6)
    y7=F.dropout(y6)
    y8=reparamfc(a1,b1,y7)
    y9=F.relu(y8)
    y10=F.dropout(y9)
    yout=reparamfc(a2,b2,y10)
    yout=torch.t(yout)
    #yout=F.softmax(yout,dim=1)
    l2=al1.norm(2)+bl1.norm(2)+al2.norm(2)+bl2.norm(2)+al3.norm(2)+bl3.norm(2)+al4.norm(2)+bl4.norm(2)+al5.norm(2)+bl5.norm(2)+al6.norm(2)+bl6.norm(2)+a1.norm(2)+b1.norm(2)+a2.norm(2)+b2.norm(2)
    lossi=lossfunc(yout,labels)+(betaparam*l2)
    if(epoch==170):
      lr=0.001
      for param_group in optimizer.param_groups:
        param_group['lr']=lr  
    lossi.backward()
    optimizer.step()
    optimizer.zero_grad()
  print('epoch {}'.format(epoch),'loss = {}'.format(lossi.item()))
```

The result is this:

```
epoch 0 loss = 2.305433988571167
epoch 1 loss = 2.3047266006469727
epoch 2 loss = 2.2993619441986084
epoch 3 loss = 2.305569887161255
epoch 4 loss = 2.303546667098999
epoch 5 loss = 2.2977681159973145
epoch 6 loss = 2.2988994121551514
epoch 7 loss = 2.305543899536133
epoch 8 loss = 2.304884672164917
epoch 9 loss = 2.3079733848571777
epoch 10 loss = 2.2997756004333496
epoch 11 loss = 2.2982029914855957
epoch 12 loss = 2.3063526153564453
epoch 13 loss = 2.3051438331604004
epoch 14 loss = 2.299895763397217
epoch 15 loss = 2.2976086139678955
epoch 16 loss = 2.303872585296631
epoch 17 loss = 2.304962635040283
epoch 18 loss = 2.292499303817749
epoch 19 loss = 2.3069281578063965
epoch 20 loss = 2.3034133911132812
epoch 21 loss = 2.3061203956604004
epoch 22 loss = 2.3057847023010254
epoch 23 loss = 2.3092713356018066
epoch 24 loss = 2.3067853450775146
epoch 25 loss = 2.3024075031280518
epoch 26 loss = 2.306104898452759
epoch 27 loss = 2.3030776977539062
epoch 28 loss = 2.302023410797119
epoch 29 loss = 2.304934024810791
epoch 30 loss = 2.3043360710144043
epoch 31 loss = 2.303095579147339
epoch 32 loss = 2.304739475250244
epoch 33 loss = 2.305116891860962
epoch 34 loss = 2.305945873260498
```

The reparamcnn functions perform the forward propagation through the layers of the network (their definitions are in the full code further down in this thread).
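Roughly, each of these layers propagates the mean and the variance of its pre-activation under the ternary weight distribution and then samples an activation (the local reparameterization trick used by Shayer et al., whose full-precision checkpoint I start from). Below is a minimal sketch of one such conv layer, with `bn_relu` standing in for a block like `Ternary_batch_rel`; note that in the standard formulation the noise is scaled by the standard deviation, i.e. `torch.sqrt(ov)`:

```python
import torch
import torch.nn.functional as F

def reparam_conv(a, b, h, bn_relu):
    # sigm(a) ~ p(w = 0), sigm(b) ~ p(w = 1 | w != 0) for ternary weights w in {-1, 0, 1}
    sigm = torch.sigmoid
    w_mean = (1 - sigm(a)) * (2 * sigm(b) - 1)    # E[w]
    w_var = (1 - sigm(a)) - w_mean ** 2           # Var[w] = E[w^2] - E[w]^2, with E[w^2] = 1 - sigm(a)
    om = F.conv2d(h, w_mean, padding=1)           # mean of the pre-activation
    ov = F.conv2d(h ** 2, w_var, padding=1)       # variance of the pre-activation
    e = torch.randn_like(ov)
    z = om + torch.sqrt(ov + 1e-8) * e            # noise scaled by the standard deviation
    return bn_relu(z)                             # e.g. a BatchNorm + ReLU block
```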

Since I cannot run your code, my best guess would be to try to lower the learning rate and play around a bit with other hyperparameters as well.

PS: I’ve formatted your code for better readability. You can add code snippets using three backticks ``` :wink:
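For example, something along these lines (just a sketch; `params` stands for the list of `a`/`b` tensors you pass to Adam):

```python
# try a smaller learning rate for Adam; the weight decay can also be set to 0 while debugging
optimizer = torch.optim.Adam(params, lr=1e-4, weight_decay=0)
```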

@ptrblck My code is below. You would just need the CIFAR-10 dataset. To run it you would also need a .pth file with the pretrained full-precision weights, but I am unable to attach the file here. How do I send it?

```python
model=torch.load('/content/cifar_fullprecison_vgg19_shayer_50.pth')
class Ternary_batch_rel(torch.nn.Module):
  def __init__(self,batchnorm_size):
    super(Ternary_batch_rel,self).__init__()
    self.l1=torch.nn.Sequential(
    torch.nn.ReLU(),
    torch.nn.BatchNorm2d(batchnorm_size)
    )
  
  def forward(self,x):
    out=self.l1(x)
    return out
    
z1=Ternary_batch_rel(128).to(device)
z2=Ternary_batch_rel(256).to(device)
z3=Ternary_batch_rel(512).to(device)

class Ternary_max_pool(torch.nn.Module):
  def __init__(self):
    
    super(Ternary_max_pool,self).__init__()
    self.l1=torch.nn.Sequential(
    torch.nn.MaxPool2d(kernel_size=2,stride=2))
      
  def forward(self,x):
    out=self.l1(x)
    return out

zm=Ternary_max_pool().to(device)
sigm=torch.nn.Sigmoid().to(device)
def convlayer1param():
  s1=torch.zeros([128,3,3,3])
  s1=model['layer1.0.weight']
  return s1
 
def convlayer2param():
  s1=torch.zeros([128,128,3,3])
  s1=model['layer1.3.weight']
  return s1

def convlayer3param():
  s1=torch.zeros([256,128,3,3])
  s1=model['layer2.0.weight']
  return s1
def convlayer4param():
  s1=torch.zeros([256,256,3,3])
  s1=model['layer2.3.weight']
  return s1

def convlayer5param():
  s1=torch.zeros([512,256,3,3])
  s1=model['layer3.0.weight']
  return s1

def convlayer6param():
  s1=torch.zeros([512,512,3,3])
  s1=model['layer3.3.weight']
  return s1
def reparamcnn1(a,b,h):
  # sigm(a) ~ p(w=0), sigm(b) ~ p(w=1 | w!=0) for the ternary weights
  weight_m= (2*sigm(b)-(2*sigm(a)*sigm(b))-1+sigm(a))   # E[w] = (1-sigm(a))*(2*sigm(b)-1)
  weight_v=(1-sigm(a))-weight_m**2                      # Var[w] = E[w^2] - E[w]^2, with E[w^2] = 1-sigm(a)
  om=F.conv2d(h,weight_m,padding=1)                     # mean of the pre-activation
  ov=F.conv2d(h**2,weight_v,padding=1)                  # variance of the pre-activation
  e=torch.randn(ov.shape).cuda()
  z=om+ov*e                                             # sample the activation
  return z1(z)

def reparamcnn2(a,b,h):
  weight_m=(2*sigm(b)-(2*sigm(a)*sigm(b))-1+sigm(a))
  weight_v=(1-sigm(a))-weight_m**2
  om=F.conv2d(h,weight_m,padding=1)
  ov=F.conv2d(h**2,weight_v,padding=1)
  e=torch.randn(ov.shape).cuda()
  #e=torch.randn(1).cuda()
  z=om+ov*e
  op=z1(z)
  return zm(op)

def reparamcnn3(a,b,h):
  weight_m= (2*sigm(b)-(2*sigm(a)*sigm(b))-1+sigm(a))
  weight_v=(1-sigm(a))-weight_m**2
  om=F.conv2d(h,weight_m,padding=1)
  ov=F.conv2d(h**2,weight_v,padding=1)
  e=torch.randn(ov.shape).cuda()
  #e=torch.randn(1).cuda()
  z=om+ov*e
  return z2(z)
  
def reparamcnn4(a,b,h):
  weight_m=(2*sigm(b)-(2*sigm(a)*sigm(b))-1+sigm(a))
  weight_v=(1-sigm(a))-weight_m**2
  om=F.conv2d(h,weight_m,padding=1)
  ov=F.conv2d(h**2,weight_v,padding=1)
  e=torch.randn(ov.shape).cuda()
  #e=torch.randn(1).cuda()
  z=om+ov*e
  op=z2(z)
  return zm(op)
 
def reparamcnn5(a,b,h):
  weight_m= (2*sigm(b)-(2*sigm(a)*sigm(b))-1+sigm(a))
  weight_v=(1-sigm(a))-weight_m**2
  
  om=F.conv2d(h,weight_m,padding=1)
  ov=F.conv2d(h**2,weight_v,padding=1)
  e=torch.randn(ov.shape).cuda()
  #e=torch.randn(1).cuda()
  z=om+ov*e

  return z3(z)

def reparamcnn6(a,b,h):
  weight_m=(2*sigm(b)-(2*sigm(a)*sigm(b))-1+sigm(a))
  weight_v=(1-sigm(a))-weight_m**2
  om=F.conv2d(h,weight_m,padding=1)
  ov=F.conv2d(h**2,weight_v,padding=1)
  e=torch.randn(ov.shape).cuda()
  #e=torch.randn(1).cuda()
  z=om+ov*e
  op=z3(z)
  return zm(op)

def reparamfc(a,b,h):
  weight_m=(2*sigm(b)-(2*sigm(a)*sigm(b))-1+sigm(a))
  weight_v=(1-sigm(a))-weight_m**2
  om=torch.matmul(weight_m,h)
  ov=torch.matmul(weight_v,h**2)
  e=torch.randn(ov.shape).cuda()
 # e=torch.randn(1).cuda()
  z=om+ov*e
  return z

def initialize(wfp):
  wtilde=wfp/torch.std(wfp)
  sigma_a=0.95-((0.95-0.05)*torch.abs(wtilde))
  sigma_b=0.5*(1+(wfp/(1-sigma_a)))
  sigma_a=torch.clamp(sigma_a,0.05,0.95)
  sigma_b=torch.clamp(sigma_b,0.05,0.95)
  a=torch.log(sigma_a/(1-sigma_a)).requires_grad_().cuda()
  b=torch.log(sigma_b/(1-sigma_b)).requires_grad_().cuda()
  
  return a,b

w1fpconv=convlayer1param()
w2fpconv=convlayer2param()
w3fpconv=convlayer3param()
w4fpconv=convlayer4param()
w5fpconv=convlayer5param()
w6fpconv=convlayer6param()
wfp1=model['layer4.1.weight']
wfp2=model['layer4.4.weight']
al1,bl1=initialize(w1fpconv)
al2,bl2=initialize(w2fpconv)
al3,bl3=initialize(w3fpconv)
al4,bl4=initialize(w4fpconv)
al5,bl5=initialize(w5fpconv)
al6,bl6=initialize(w6fpconv)
a1,b1=initialize(wfp1)
a2,b2=initialize(wfp2)

al1=torch.nn.Parameter(al1)
bl1=torch.nn.Parameter(bl1)
al2=torch.nn.Parameter(al2)
bl2=torch.nn.Parameter(bl2)
al3=torch.nn.Parameter(al3)
bl3=torch.nn.Parameter(bl3)
al4=torch.nn.Parameter(al4)
bl4=torch.nn.Parameter(bl4)
al5=torch.nn.Parameter(al5)
bl5=torch.nn.Parameter(bl5)
al6=torch.nn.Parameter(al6)
bl6=torch.nn.Parameter(bl6)
a1=torch.nn.Parameter(a1)
b1=torch.nn.Parameter(b1)
a2=torch.nn.Parameter(a2)
b2=torch.nn.Parameter(b2)


betaparam=1e-11
lossfunc=torch.nn.CrossEntropyLoss().to(device)

lr=0.01
optimizer=torch.optim.Adam([al1,bl1,al2,bl2,al3,bl3,al4,bl4,al5,bl5,al6,bl6,a1,b1,a2,b2],lr,weight_decay=1e-4)

num_epochs=10

for epoch in range(num_epochs):
  for i,(images,labels) in enumerate(train_loader):
    images=images.to(device)
    labels=labels.to(device)
    y1=reparamcnn1(al1,bl1,images)
    y2=reparamcnn2(al2,bl2,y1)
    y3=reparamcnn3(al3,bl3,y2)
    y4=reparamcnn4(al4,bl4,y3)
    y5=reparamcnn5(al5,bl5,y4)
    y6=reparamcnn6(al6,bl6,y5)
    y6=y6.reshape(y6.size(0),-1)
    y6=torch.t(y6)
    y7=F.dropout(y6)
    y8=reparamfc(a1,b1,y7)
    y9=F.relu(y8)
    y10=F.dropout(y9)
    yout=reparamfc(a2,b2,y10)
    yout=torch.t(yout)
    #yout=F.softmax(yout,dim=1)
    l2=al1.norm(2)+bl1.norm(2)+al2.norm(2)+bl2.norm(2)+al3.norm(2)+bl3.norm(2)+al4.norm(2)+bl4.norm(2)+al5.norm(2)+bl5.norm(2)+al6.norm(2)+bl6.norm(2)+a1.norm(2)+b1.norm(2)+a2.norm(2)+b2.norm(2)
    lossi=lossfunc(yout,labels)+(betaparam*l2)
    if(epoch==170):
      lr=0.001
      for param_group in optimizer.param_groups:
        param_group['lr']=lr  
    lossi.backward()
    optimizer.step()
    optimizer.zero_grad()
  print('epoch {}'.format(epoch),'loss = {}'.format(lossi.item()))
```
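One sanity check I can run with the objects defined above (a sketch, dropout omitted): repeatedly train on a single fixed batch. If the loss does not drop well below 2.30 even on one batch, something in the graph or the parameterization is broken.

```python
# overfit a single batch as a sanity check
images, labels = next(iter(train_loader))
images, labels = images.to(device), labels.to(device)
for step in range(200):
    y = reparamcnn1(al1, bl1, images)
    y = reparamcnn2(al2, bl2, y)
    y = reparamcnn3(al3, bl3, y)
    y = reparamcnn4(al4, bl4, y)
    y = reparamcnn5(al5, bl5, y)
    y = reparamcnn6(al6, bl6, y)
    y = torch.t(y.reshape(y.size(0), -1))
    y = F.relu(reparamfc(a1, b1, y))
    yout = torch.t(reparamfc(a2, b2, y))
    loss = lossfunc(yout, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(step, loss.item())
```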

Are you using a standard vgg19 from torchvision?
If so, I could just use the pretrained weights or train from scratch so that you wouldn’t have to upload the data.

@ptrblck, I used the following architecture. In my case I needed to train the network with no bias on any of the layers, so I could not use the pretrained weights and had to train the full-precision weights myself. (A rough PyTorch sketch of the architecture follows the list.)

1. Conv2d(output dim=128, input dim=3, kernel_size=3, padding=1, bias=False)
2. BatchNorm(128)
3. ReLU
4. Conv2d(output dim=128, input dim=128, kernel_size=3, padding=1, bias=False)
5. BatchNorm(128)
6. ReLU
7. MaxPool(kernel_size=2, stride=2)
8. Conv2d(output dim=256, input dim=128, kernel_size=3, padding=1, bias=False)
9. BatchNorm(256)
10. ReLU
11. Conv2d(output dim=256, input dim=256, kernel_size=3, padding=1, bias=False)
12. BatchNorm(256)
13. ReLU
14. MaxPool(kernel_size=2, stride=2)
15. Conv2d(output dim=512, input dim=256, kernel_size=3, padding=1, bias=False)
16. BatchNorm(512)
17. ReLU
18. Conv2d(output dim=512, input dim=512, kernel_size=3, padding=1, bias=False)
19. BatchNorm(512)
20. ReLU
21. MaxPool(kernel_size=2, stride=2)
22. Dropout(0.5)
23. Fully connected layer (output dim=1024, input dim=8192, bias=False)
24. ReLU
25. Dropout(0.5)
26. Fully connected layer (output dim=10, input dim=1024, bias=False), followed by softmax
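For reference, here is a rough PyTorch sketch of this architecture, written so that the `state_dict` keys line up with the ones I index above (`layer1.0.weight`, `layer1.3.weight`, ..., `layer4.1.weight`, `layer4.4.weight`). The class name `VGGSmall` is just a placeholder and this is only my reconstruction from those keys, not necessarily the exact module definition:

```python
import torch

class VGGSmall(torch.nn.Module):
    def __init__(self, num_classes=10):
        super(VGGSmall, self).__init__()
        self.layer1 = torch.nn.Sequential(
            torch.nn.Conv2d(3, 128, kernel_size=3, padding=1, bias=False),    # layer1.0
            torch.nn.BatchNorm2d(128),
            torch.nn.ReLU(),
            torch.nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False),  # layer1.3
            torch.nn.BatchNorm2d(128),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = torch.nn.Sequential(
            torch.nn.Conv2d(128, 256, kernel_size=3, padding=1, bias=False),  # layer2.0
            torch.nn.BatchNorm2d(256),
            torch.nn.ReLU(),
            torch.nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False),  # layer2.3
            torch.nn.BatchNorm2d(256),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer3 = torch.nn.Sequential(
            torch.nn.Conv2d(256, 512, kernel_size=3, padding=1, bias=False),  # layer3.0
            torch.nn.BatchNorm2d(512),
            torch.nn.ReLU(),
            torch.nn.Conv2d(512, 512, kernel_size=3, padding=1, bias=False),  # layer3.3
            torch.nn.BatchNorm2d(512),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer4 = torch.nn.Sequential(
            torch.nn.Dropout(0.5),
            torch.nn.Linear(8192, 1024, bias=False),                          # layer4.1
            torch.nn.ReLU(),
            torch.nn.Dropout(0.5),
            torch.nn.Linear(1024, num_classes, bias=False))                   # layer4.4

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = x.reshape(x.size(0), -1)   # 512 * 4 * 4 = 8192 for 32x32 CIFAR-10 inputs
        return self.layer4(x)          # raw logits; the softmax lives inside the loss
```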

This setup gave me 92.2% accuracy on the validation set. I trained it for 300 epochs using SGD with an initial learning rate of 0.05 that is halved every 30 epochs, momentum=0.9, weight_decay=5e-4, and a batch size of 50.
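In code, that training setup corresponds roughly to this sketch (using the hypothetical `VGGSmall` from above and assuming `device` and `train_loader` are defined):

```python
import torch

model = VGGSmall().to(device)  # hypothetical module sketched above
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)  # halve lr every 30 epochs
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(300):
    for images, labels in train_loader:  # batch size 50
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```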

@ptrblck any leads on this?

@ptrblck, I ran the code many times and changed the batch size, learning rate, activation functions, weight decay and the L2 regularization parameter, but the problem remains: the gradients are very small, so the updates are insignificant, the loss does not decrease, and the accuracy does not improve (the model underfits).
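This is roughly how I looked at the gradient magnitudes, right after `lossi.backward()` (a sketch):

```python
# inspect gradient and parameter norms of every trainable tensor after lossi.backward()
names = ['al1','bl1','al2','bl2','al3','bl3','al4','bl4','al5','bl5','al6','bl6','a1','b1','a2','b2']
tensors = [al1, bl1, al2, bl2, al3, bl3, al4, bl4, al5, bl5, al6, bl6, a1, b1, a2, b2]
for name, p in zip(names, tensors):
    grad_norm = p.grad.norm().item() if p.grad is not None else float('nan')
    print('{}: grad norm = {:.3e}, param norm = {:.3e}'.format(name, grad_norm, p.norm().item()))
```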

I have also initialized the parameters appropriately and used ReLU to avoid the vanishing-gradient problem. I also tried parametrized ReLU (PReLU) and leaky ReLU to make sure the 'dying ReLU' problem is not the cause.
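For example, the swap looked roughly like this (replacing the plain `torch.nn.ReLU()` inside `Ternary_batch_rel`):

```python
# leaky ReLU with a small negative slope instead of a plain ReLU
act = torch.nn.LeakyReLU(negative_slope=0.01)
# or PReLU with a learnable slope (its parameter would also need to be added to the optimizer)
# act = torch.nn.PReLU()
```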

I also have a question: should the labels given to nn.CrossEntropyLoss be one-hot encoded? I don't think that is necessary.
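(For reference, `nn.CrossEntropyLoss` expects raw logits and integer class indices, and applies log-softmax internally, e.g.:)

```python
criterion = torch.nn.CrossEntropyLoss()
logits = torch.randn(4, 10)           # raw scores for a batch of 4 over 10 classes (no softmax)
targets = torch.tensor([3, 0, 9, 1])  # class indices, not one-hot vectors
loss = criterion(logits, targets)
```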

Any leads?
Same thing here: my loss remains constant at 2.30.
How did you solve it?

Any leads?
Same problem here: the loss remains constant at 2.30.
I also googled a lot and found that CrossEntropyLoss already applies the softmax internally; I tried that as well, but nothing worked.
How did you solve it?