How to do only one forward propagation per epoch and multiple backward propagations on a graph

How can I do only one forward propagation per epoch and multiple backward propagations?
I want to first get the vector representations by passing the graph through several graph neural network layers,
and then run a backward pass for each batch to optimize the loss.

What I want to do is shown in code B.
The main difference is the position of 'user_embedding, item_embedding = model(G)',
but code B raises an error:
TypeError: cannot unpack non-iterable NoneType object

Is there any way to do this?
The same error: link

Thank you!

#code A

model = Model(G, 8, 8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

for epoch in range(2):
    for batch_data in dataset:
        optimizer.zero_grad()
        user_embedding, item_embedding = model(G)  # forward pass repeated for every batch
        loss = model.getloss(batch_data, user_embedding, item_embedding)
        loss.backward()
        optimizer.step()

#code B

model = Model(G, 8, 8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
user_embedding, item_embedding = model(G)  # single forward pass before training

for epoch in range(2):
    for batch_data in dataset:
        optimizer.zero_grad()
        loss = model.getloss(batch_data, user_embedding, item_embedding)
        loss.backward()
        optimizer.step()

You just need to call loss.backward(retain_graph=True).
This will allow you to backprop through the same graph several times.
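
A minimal sketch of code B with that change, assuming the same Model, G, getloss, and dataset as in the question:

model = Model(G, 8, 8, 8)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
user_embedding, item_embedding = model(G)  # single forward pass

for epoch in range(2):
    for batch_data in dataset:
        optimizer.zero_grad()
        loss = model.getloss(batch_data, user_embedding, item_embedding)
        # retain_graph=True keeps the autograd graph alive so it can be
        # backpropagated through again for the next batch
        loss.backward(retain_graph=True)
        optimizer.step()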

Thank you for your reply.
The error still occurs.

Traceback (most recent call last):
  File "/Users/cc/Downloads/dgl-examples-pytorch/code1211/hgnn/hgnn.py", line 778, in <module>
    main()
  File "/Users/cc/Downloads/dgl-examples-pytorch/code1211/hgnn/hgnn.py", line 699, in main
    loss.backward(retain_graph=True)
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/torch/autograd/function.py", line 77, in apply
    return self._forward_cls.backward(self, *args)
  File "/Users/cc/PycharmProjects/testfolder1127/venvpy3/lib/python3.7/site-packages/dgl/backend/pytorch/tensor.py", line 336, in backward
    = ctx.backward_cache
TypeError: cannot unpack non-iterable NoneType object

Here is the code:

Hi, I think you have some other issue.

import torch

model = torch.nn.Conv2d(1, 2, 3)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
inp = torch.rand(1, 1, 10, 10)
out = model(inp)      # single forward pass
scalar = out.mean()

for _ in range(10):
    print(_)
    opt.zero_grad()
    # with torch.no_grad():
    gt = torch.rand_like(out)
    loss = (scalar * gt).mean()  # just a dummy loss
    loss.backward(retain_graph=True)  # backprop through the same graph repeatedly
    print(list(model.parameters())[0].grad)
    opt.step()

This minimal example emulates what you want to do in code B. It's runnable and

0
tensor([[[[0.1069, 0.1032, 0.1004],
          [0.1086, 0.1056, 0.1034],
          [0.1055, 0.1049, 0.1043]]],


        [[[0.1069, 0.1032, 0.1004],
          [0.1086, 0.1056, 0.1034],
          [0.1055, 0.1049, 0.1043]]]])
1
tensor([[[[0.1168, 0.1128, 0.1097],
          [0.1187, 0.1154, 0.1129],
          [0.1152, 0.1146, 0.1139]]],


        [[[0.1168, 0.1128, 0.1097],
          [0.1187, 0.1154, 0.1129],
          [0.1152, 0.1146, 0.1139]]]])
2
tensor([[[[0.1250, 0.1206, 0.1174],
          [0.1270, 0.1234, 0.1208],
          [0.1233, 0.1226, 0.1219]]],


        [[[0.1250, 0.1206, 0.1174],
          [0.1270, 0.1234, 0.1208],
          [0.1233, 0.1226, 0.1219]]]])
3
tensor([[[[0.1225, 0.1182, 0.1150],
          [0.1244, 0.1210, 0.1184],
          [0.1208, 0.1201, 0.1194]]],


        [[[0.1225, 0.1182, 0.1150],
          [0.1244, 0.1210, 0.1184],
          [0.1208, 0.1201, 0.1194]]]])
4
tensor([[[[0.1216, 0.1173, 0.1142],
          [0.1235, 0.1201, 0.1175],
          [0.1199, 0.1192, 0.1185]]],


        [[[0.1216, 0.1173, 0.1142],
          [0.1235, 0.1201, 0.1175],
          [0.1199, 0.1192, 0.1185]]]])
5
tensor([[[[0.1166, 0.1125, 0.1095],
          [0.1184, 0.1151, 0.1127],
          [0.1150, 0.1144, 0.1137]]],


        [[[0.1166, 0.1125, 0.1095],
          [0.1184, 0.1151, 0.1127],
          [0.1150, 0.1144, 0.1137]]]])
6
tensor([[[[0.1281, 0.1236, 0.1203],
          [0.1301, 0.1265, 0.1238],
          [0.1264, 0.1256, 0.1249]]],


        [[[0.1281, 0.1236, 0.1203],
          [0.1301, 0.1265, 0.1238],
          [0.1264, 0.1256, 0.1249]]]])
7
tensor([[[[0.1114, 0.1075, 0.1046],
          [0.1131, 0.1100, 0.1077],
          [0.1099, 0.1092, 0.1086]]],


        [[[0.1114, 0.1075, 0.1046],
          [0.1131, 0.1100, 0.1077],
          [0.1099, 0.1092, 0.1086]]]])
8
tensor([[[[0.1179, 0.1138, 0.1107],
          [0.1197, 0.1164, 0.1139],
          [0.1163, 0.1156, 0.1149]]],


        [[[0.1179, 0.1138, 0.1107],
          [0.1197, 0.1164, 0.1139],
          [0.1163, 0.1156, 0.1149]]]])
9
tensor([[[[0.1183, 0.1141, 0.1111],
          [0.1202, 0.1168, 0.1143],
          [0.1167, 0.1160, 0.1153]]],


        [[[0.1183, 0.1141, 0.1111],
          [0.1202, 0.1168, 0.1143],
          [0.1167, 0.1160, 0.1153]]]])

gradients flow properly.
Your error seems to be related to a None value. Is the graph OK? Have you tried at least one run checking that the gradients are not None?

Thank you, I will check it.
The first batch gets the right loss,
but the error occurs before the second loss.backward(retain_graph=True).

Oh, but you have to set retain_graph=True for all the runs :slight_smile: even the first one.

So that’s what I’m confused about.
Thank you again.

I used print(list(model.parameters())[0].grad)
but got None even in the first batch.

Hi,
it seems your network doesn't backpropagate properly.

You can check where by doing:

for name, param in model.named_parameters():
    if param.grad is None:
        print(name)

So if the problem happens just right after the loss (which is probably the case, as I assume the loss is coded by yourself), you may find the culprit.
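
For illustration, a small self-contained sketch of where such a check could run, reusing the Conv2d toy model from the earlier reply:

import torch

model = torch.nn.Conv2d(1, 2, 3)
out = model(torch.rand(1, 1, 10, 10))
loss = out.mean()
loss.backward(retain_graph=True)

# any parameter printed here received no gradient from the backward pass;
# nothing should be printed when gradients flow to every parameter
for name, param in model.named_parameters():
    if param.grad is None:
        print(name)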

Thank you for your kind help, I will check it. :+1: