Loss is calculated correctly, but all layer grad.data are zero

Hello, everybody.

I have a problem I can’t understand.
The loss is calculated correctly through multiple layers, but after loss.backward() all of the grad.data tensors are zero, like below.

This is the gradient of one of my layers:
> module.module.filterDTW.proj.bias grad tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
> 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0')
It comes from this script:

for i, (name, param) in enumerate(model.named_parameters()):
    if param.grad is None:
        print(name, "is None, requires_grad : ", param.requires_grad)
    else:
        print(name, "grad", param.grad.data)

I'm really confused about which part I have to fix.
I think that 1) the model output is OK, 2) the model weight initialization is OK, etc.

Could you share your thoughts on this problem?

I know that these are silly options, but it's better to start from something obvious than to delve into more complicated problems from the beginning (a quick check for both is sketched after the list):

  • zero_grad applied before printing?
  • value of the loss is zero?
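For example, here is a tiny self-contained sketch of both checks (toy model and names, not your code):

import torch
from torch import nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(8, 4)).pow(2).mean()
print("loss:", loss.item())    # second check: the loss value should not be zero
loss.backward()
print(model.weight.grad)       # non-zero gradients at this point
optimizer.zero_grad()
print(model.weight.grad)       # first check: after zero_grad() the gradients are cleared (zeros or None, depending on the PyTorch version)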

@ParGG Thanks for your reply. optimizer.zero_grad() is called before printing, and the value of the loss is not zero.

If you do:

optimizer.zero_grad()

for i, (name, param) in enumerate(model.named_parameters()):
    if param.grad is None:
        print(name, "is None, requires_grad : ", param.requires_grad)
    else:
        print(name, "grad", param.grad.data)

then you will see gradients equal to 0, because zero_grad() clears them. To see the gradient values, you should print them right after loss.backward().
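For instance, a minimal self-contained sketch of the intended ordering (toy model; your model, loss, and optimizer go in the same places):

import torch
from torch import nn

model = nn.Linear(4, 2)
x = torch.randn(8, 4)
target = torch.randn(8, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

optimizer.zero_grad()                   # gradients cleared here; printing now shows zeros or None
loss = criterion(model(x), target)      # forward pass and loss
loss.backward()                         # gradients populated here
print(model.weight.grad.abs().max())    # inspect right after backward(), before the next zero_grad()
optimizer.step()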

@ParGG
My simplified script looks like this. I already put the gradient check right after loss.backward().

optimizer.zero_grad()
shot_repr = model(input, masking_mask) 
loss = criterion(shot_repr)
loss.backward()
for i, (name, param) in enumerate(model.named_parameters()):
    if param.grad is None:
        print("engine.py 115 ", name, "is None, requires_grad : ", param.requires_grad)
    else:
        print("engine.py 117", name, "grad", param.grad.data)
optimizer.step()

Is there another possible reason why all of the gradients would be zero?

Another question is about memory release after loss.backward().
I check memory allocation with print(torch.cuda.memory_allocated()) as below.

optimizer.zero_grad()
print("test 1", torch.cuda.memory_allocated())
shot_repr = model(input, masking_mask) 
loss = criterion(shot_repr)
print("test 2", torch.cuda.memory_allocated())
loss.backward()
print("test 3", torch.cuda.memory_allocated())
for i, (name, param) in enumerate(model.named_parameters()):
    if param.grad is None:
        print("engine.py 115 ", name, "is None, requires_grad : ", param.requires_grad)
    else:
        print("engine.py 117", name, "grad", param.grad.data)
optimizer.step()
print("test 3", torch.cuda.memory_allocated())

>> test1 850558464
>> test2 7384698880
>> test3 872312320
>> test4 758866944

Is it right that memory is released after loss.backward()? If not, what is the correct memory release sequence?
I think the zero-gradient problem may be related to memory release.
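For reference, here is a small self-contained sketch of the pattern I mean (the model and sizes are only illustrative, not my actual code):

import torch
from torch import nn

# toy model with a few layers so that intermediate activations are saved for backward
model = nn.Sequential(
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024),
).to("cuda")
x = torch.randn(4096, 1024, device="cuda")

print(torch.cuda.memory_allocated())   # parameters + input
out = model(x)
loss = out.pow(2).mean()
print(torch.cuda.memory_allocated())   # higher: intermediate activations are kept for backward
loss.backward()
print(torch.cuda.memory_allocated())   # drops again: saved activations are freed, gradients remain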

Here is a script I took from a PyTorch tutorial and adapted to print the gradients.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

train_dataloader = DataLoader(training_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        return self.linear_relu_stack(x)

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X.to("cuda"))
        loss = loss_fn(pred, y.to("cuda"))

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")

    print_grad()

def print_grad():
    print("\nGrad:")
    for name, param in model.named_parameters():
        if param.grad is None:
            print(f"{name:<27}", "is None, requires_grad : ", param.requires_grad)
        else:
            print(f"{name:<27}", "grad", param.grad.data.max(), param.grad.data.min())


def test_loop(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0

    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X.to("cuda"))
            test_loss += loss_fn(pred, y.to("cuda")).to("cpu").item()
            correct += (pred.argmax(1) == y.to("cuda")).to("cpu").type(torch.float).sum().item()

    test_loss /= num_batches
    correct /= size
    print(f"\nTest Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

model = NeuralNetwork().to("cuda")

learning_rate = 1e-3
batch_size = 64
epochs = 5

# Initialize the loss function
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

epochs = 2
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)
print("Done!")

Could you let me know whether, when you replace the model, dataset, optimizer, and loss with yours, you still get the same error?

@ParGG
Thank you for your reply. I found which line was wrong in my code: it was a CUDA memory allocation issue on my side, not a PyTorch problem. Sorry for bothering you, and thank you once more.