Tensor.grad is None

Thanks, I have a question. I’m writing a loss function:

def forward(self, input, target):
    y = one_hot(target, input.size(-1))
    Psoft = torch.nn.functional.softmax(input).cpu()
    Loss = 0
    t1 = target.view(1, target.size(0)).cpu()
    for i in range(0, target.size(0) - 1):
        t2 = t1[0, i]
        flag = int(t2.item())
        for j in range(1, flag + 2):
            P1 = Psoft[i, :j]
            y1 = y[i, :j]
            Loss = (P1 - y1).sum().pow(2).sum()
            # Loss += (sum(P1 - y1)) ** 2
        if int(t2.item()) != 7:
            for k in range(flag + 1, 9):
                P2 = Psoft[i, flag + 1:8]
                y2 = y[i, flag + 1:8]
                Loss = (P2 - y2).sum().pow(2).sum()
                # Loss += (sum(P2 - y2)) ** 2
    Loss = Loss / target.size(0)
    print(Loss.grad)
    return Loss

It’s written in PyTorch 0.4.
target is a tensor of size 64×1 and Psoft is a tensor of size 64×8. I found that Loss.grad is None. How do I get the gradients? Thanks a lot!

It’s okay that loss.grad is None since the gradients are calculated during the backward call and you don’t call backward anywhere.
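
As a minimal, self-contained illustration (the tensor w here is just a stand-in, not your model):

import torch

# a leaf tensor that requires gradients
w = torch.randn(3, requires_grad=True)
loss = (w * 2).sum()

print(w.grad)    # None - backward has not been called yet
loss.backward()  # autograd now computes and stores the gradients
print(w.grad)    # tensor([2., 2., 2.])

Note that backward fills in .grad on leaf tensors such as w (or your model’s parameters); a non-leaf tensor like the loss itself only gets a .grad if you call loss.retain_grad() before backward.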

PyTorch has an autograd mechanism. How do I use it?

A small example would be

loss = criterion(predictions, target)
optim.zero_grad()  # clear gradients left over from the previous iteration
loss.backward()    # this actually calculates the gradients
optim.step()       # update the parameters using those gradients

Here, criterion would be an instance of your custom loss module, predictions and target would be tensors holding the corresponding values (with a gradient path for predictions, which autograd tracks automatically), and optim would be an instance of an optimizer class of your choice.
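
Putting it together for a custom loss module, here is a rough end-to-end sketch; the MySquaredLoss class, the Linear model, the optimizer choice, and all shapes are made-up placeholders standing in for your own code:

import torch
import torch.nn as nn

# hypothetical stand-in for a custom loss module like yours
class MySquaredLoss(nn.Module):
    def forward(self, input, target):
        # toy example loss; your own forward logic would go here
        return (input - target).pow(2).sum() / target.size(0)

model = nn.Linear(8, 8)                               # made-up model producing the predictions
criterion = MySquaredLoss()
optim = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 8)                                # dummy input batch
target = torch.randn(64, 8)                           # dummy targets for this toy loss

predictions = model(x)                                # autograd tracks the graph through the model
loss = criterion(predictions, target)

optim.zero_grad()                                     # clear old gradients
loss.backward()                                       # gradients land in the parameters' .grad
optim.step()                                          # update the parameters

print(model.weight.grad.shape)                        # torch.Size([8, 8]) - not None anymore

SGD is just one choice here; any optimizer from torch.optim is used the same way.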