Loss and model output have requires_grad=False, and parameter grads are None

This is the definition of my model:

class MyClassifier(nn.Module):
    def __init__(self, args):
        super(MyClassifier, self).__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(output_size=(1, 1))
        self.fc1 = nn.Linear(in_features=args.input_features, out_features=args.hidden_size, bias=True)
        if args.classifier_act == 'Tanh':
            self.act = nn.Tanh()
        else:
            self.act = nn.Sigmoid()
        self.fc2 = nn.Linear(args.hidden_size, args.num_classes)

    def forward(self, input):
        x = self.avgpool(input)
        x = x.view(x.size(0), -1)
        x = self.act(self.fc1(x))
        x = self.fc2(x)
        return x


And this is my training loop:

for batch, (lr, hr, rcan, hran, srfbn, csnln, gmfn, drln, rnan, mardn, label, filename, idx_scale) in enumerate(self.loader_train):
    lr = self.prepare([lr])
    self.optimizer.zero_grad()
    self.scores = self.model(input, idx_scale)
    self.scores = torch.sigmoid(self.scores)
    loss = self.loss(self.scores, label)
    loss = Variable(loss, requires_grad=True)
    loss.backward()
    self.optimizer.step()
    print(loss.grad)
    for name, param in self.model.named_parameters():
        if param.grad is not None:
            print(name, param.grad.sum())
        else:
            print(name, param.grad)

The requires_grad of both the loss and the model output is False:
scores required true: False
loss required true: False

I wrapped the loss in a Variable as follows:

loss = Variable(loss, requires_grad=True)

But still, the grad is None and my model is not training. This is what gets printed:

tensor(1., device='cuda:0')
model.fc1.weight None
model.fc1.bias None
model.fc2.weight None
model.fc2.bias None
I have no idea why this happens.
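
To make the symptom concrete, here is a minimal, self-contained sketch (not my real pipeline; it just assumes, hypothetically, that the scores somehow get detached from the graph) that reproduces the same behaviour:

import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(3, 2)
input = torch.randn(4, 3)
label = torch.randn(4, 2)

# hypothetical: suppose the scores get detached somewhere in the pipeline
scores = model(input).detach()
loss = nn.functional.mse_loss(scores, label)
print('loss.requires_grad =', loss.requires_grad)   # False

# wrapping the loss only creates a new leaf tensor; it is not connected
# to the model, so backward() stops right there
loss = Variable(loss, requires_grad=True)
loss.backward()
print(loss.grad)                                     # tensor(1.)
for name, param in model.named_parameters():
    print(name, param.grad)                          # None for both weight and bias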

Hi Ngh!

It’s hard to tell what is going on here.

You haven’t posted all of your code, and the code you posted is
garbled by the forum.

Please post a trimmed-down, complete, runnable script that illustrates
your issue, and please enclose your code in a triple-backtick (```) code
block so that the forum will format it nicely.

Having said that, here is a simplified version of your script that shows
a non-trivial weight and bias for your Linear layers and an output
and loss for which requires_grad = True. Maybe something in it
will help you find your problem.

The script:

import torch
import torch.nn as nn

print ('torch.__version__ =', torch.__version__)

class SomeArgs:
    pass

args = SomeArgs()
args.input_features = 3
args.hidden_size = 5
args.classifier_act = None
args.num_classes = 4

class MyClassifier(nn.Module):
    def __init__ (self, args):
        super (MyClassifier, self).__init__()
        self.fc1= nn.Linear(in_features= args.input_features, out_features=args.hidden_size, bias=True)
        if args.classifier_act == 'Tanh':
            self.act = nn.Tanh()
        else:
            self.act = nn.Sigmoid()
        self.fc2 = nn.Linear(args.hidden_size,args.num_classes)
    def forward(self, input):
        x = input
        x= self.act(self.fc1(x))
        x= self.fc2(x)
        return x

model = MyClassifier (args)

print ('model.fc1 = ...\n', model.fc1)
print ('model.fc1.weight = ...\n', model.fc1.weight)
print ('model.fc1.bias = ...\n', model.fc1.bias)
input = torch.randn ((1, 3, 3))
output = model (input)
print ('output.requires_grad =', output.requires_grad)
loss = output.sum()
print ('loss.requires_grad =', loss.requires_grad)

And its output:

>>> exec (open ('./ngh.py').read())
torch.__version__ = 1.6.0
model.fc1 = ...
 Linear(in_features=3, out_features=5, bias=True)
model.fc1.weight = ...
 Parameter containing:
tensor([[ 0.4360, -0.3469,  0.2549],
        [-0.4405, -0.0466, -0.3481],
        [-0.3794,  0.1865, -0.3516],
        [-0.4399, -0.3565,  0.0524],
        [-0.4555, -0.1364, -0.1984]], requires_grad=True)
model.fc1.bias = ...
 Parameter containing:
tensor([ 0.4207,  0.4298,  0.0848, -0.0472,  0.2806], requires_grad=True)
output.requires_grad = True
loss.requires_grad = True
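
And, as a quick further check (a small sketch of mine that assumes the model, input, output, and loss defined in the script above), running a backward pass shows that all four parameters get a non-None gradient when the graph is intact:

# continuation of the script above
loss.backward()
for name, param in model.named_parameters():
    print (name, param.grad.sum())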

Good luck.

K. Frank

Hi Frank,

Thanks for your help.