TypeError: 'NoneType' object is not iterable

I am trying to implement a Deep Q-Network (DQN) with a graph convolutional network (GCN) using the Deep Graph Library (DGL). The base code is taken from this repository (https://github.com/louisv123/COLGE/blob/master/agent.py). However, after I calculate the loss between the policy network and the target network and call loss.backward(), I get TypeError: 'NoneType' object is not iterable. I have printed the loss value and it is not None.

I ran the original code from the repository and it runs perfectly. I have also implemented the GCN in DGL and it seems to run, and I have visualized the computation graph using torchviz. But I am unable to find why it is giving this error.

The code snippet is given below:

current_q_values = self.model(last_observation_tens, self.G)
next_q_values = current_q_values.clone()
current_q_values[range(self.minibatch_length), action_tens, :] = target
L = self.criterion(current_q_values, next_q_values)
print('loss:', L.item())
self.optimizer.zero_grad()
L.backward(retain_graph=True)
self.optimizer.step()

loss: 1461729.125

  ---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-17-cd5e862dd609> in <module>()
     62 
     63 if __name__ == "__main__":
---> 64     main()

7 frames
<ipython-input-17-cd5e862dd609> in main()
     55         print("Running a single instance simulation...")
     56         my_runner = Runner(env_class, agent_class, args.verbose)
---> 57         final_reward = my_runner.loop(graph_dic,args.ngames,args.epoch, args.niter)
     58         print("Obtained a final reward of {}".format(final_reward))
     59         agent_class.save_model()

<ipython-input-14-45cfc883a37b> in loop(self, graphs, games, nbr_epoch, max_iter)
     45                         # if self.verbose:
     46                         #   print("Simulation step {}:".format(i))
---> 47                         (obs, act, rew, done) = self.step()
     48                         action_list.append(act)
     49 

<ipython-input-14-45cfc883a37b> in step(self)
     16         #reward = torch.tensor([reward], device=device)
     17 
---> 18         self.agent.reward(observation, action, reward,done)
     19 
     20         return (observation, action, reward, done)

<ipython-input-16-76d612e8663c> in reward(self, observation, action, reward, done)
    129               print('loss:',L.item())
    130               self.optimizer.zero_grad()
--> 131               L.backward(retain_graph=True)
    132               self.optimizer.step()
    133 

/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    148                 products. Defaults to ``False``.
    149        
--> 150         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    151 
    152     def register_hook(self, hook):

/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     97     Variable._execution_engine.run_backward(
     98         tensors, grad_tensors, retain_graph, create_graph,
---> 99         allow_unreachable=True)  # allow_unreachable flag
    100 
    101 

/usr/local/lib/python3.6/dist-packages/torch/autograd/function.py in apply(self, *args)
     75 
     76     def apply(self, *args):
---> 77         return self._forward_cls.backward(self, *args)
     78 
     79 

/usr/local/lib/python3.6/dist-packages/dgl/backend/pytorch/tensor.py in backward(ctx, grad_out)
    394     def backward(ctx, grad_out):
    395         reducer, graph, target, in_map, out_map, in_data_nd, out_data_nd, degs \
--> 396             = ctx.backward_cache
    397         ctx.backward_cache = None
    398         grad_in = None

TypeError: 'NoneType' object is not iterable

Kindly help.

Hi,

Do you get any more information from the stack trace?
Can you run with anomaly mode enabled (see the autograd anomaly detection doc) to see if you get a better stack trace?

Thank you for the reply.
I ran the code with anomaly mode enabled:

with autograd.detect_anomaly():
    (last_observation_tens, action_tens, reward_tens, observation_tens) = self.get_sample()
    target = reward_tens + self.gamma * torch.max(self.model_target(observation_tens, self.G) + observation_tens * (-1e5), dim=1)[0]
    current_q_values = self.model(last_observation_tens, self.G)
    next_q_values = current_q_values.clone()
    current_q_values[range(self.minibatch_length), action_tens, :] = target
    L = self.criterion(current_q_values, next_q_values)
    print('loss:', L.item())
    self.optimizer.zero_grad()
    L.backward(retain_graph=True)
    self.optimizer.step()

and got the following output
/pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:40: UserWarning: No forward pass information available. Enable detect anomaly during forward pass for more information.
How do I enable anomaly detection during the forward pass?
I also updated the question with the full stack trace.

You seem to have it already available during the forward…
Can you provide a small code sample (30 lines) that reproduces this issue?
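For reference, anomaly detection covers the forward pass when either the context manager wraps the forward computation (as in your snippet) or it is switched on globally with torch.autograd.set_detect_anomaly(True) near the top of the script. A minimal sketch, using a stand-in linear model rather than your DQN/GCN:

```python
import torch
from torch import autograd

# Hypothetical stand-in for the policy network's forward pass.
model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)

# Option A: the context manager covers both forward and backward,
# so autograd records the forward-pass metadata it needs.
with autograd.detect_anomaly():
    loss = model(x).sum()
    loss.backward()

# Option B: enable it globally, e.g. once at the top of the script.
torch.autograd.set_detect_anomaly(True)
```

With either option in place, a failing backward op should report the forward operation that produced it instead of the "No forward pass information available" warning.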

The error is self-explanatory: you are trying to iterate over (or unpack) an object which you think is a list, tuple, or dict, but which is actually None. In your traceback this happens inside DGL's backward, at the tuple unpacking of ctx.backward_cache.

A closely related error is:

TypeError: 'NoneType' object is not subscriptable

Python throws this one when you use the square-bracket notation object[key] on an object that doesn't define the __getitem__ method, i.e. when you effectively do:

None[something]

You might have noticed that methods like sort(), which only modify the list in place, have no return value printed: they return the default None. This is a design principle for all mutable data structures in Python.
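The two variants can be reproduced in a few lines; note that the "not iterable" form is exactly what tuple unpacking of a None produces, which matches the ctx.backward_cache line in the traceback above. (On Python 3.6, as in the traceback, the unpacking message reads "'NoneType' object is not iterable"; newer versions phrase it as "cannot unpack non-iterable NoneType object".)

```python
cache = None

# Unpacking None: the error from the traceback, where
# ctx.backward_cache was None when DGL tried to tuple-unpack it.
try:
    a, b = cache
except TypeError as e:
    unpack_msg = str(e)

# Indexing None: the closely related "not subscriptable" error.
try:
    cache[0]
except TypeError as e:
    index_msg = str(e)

print(unpack_msg)
print(index_msg)  # 'NoneType' object is not subscriptable
```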

Example 1

list1 = [5, 1, 2, 6]   # create a simple list
order = list1.sort()   # sort the list in place; sort() returns None
order[0]               # trying to access the first element after sorting

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      1 list1 = [5, 1, 2, 6]
      2 order = list1.sort()
----> 3 order[0]

TypeError: 'NoneType' object is not subscriptable
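The example above can be fixed either by using the built-in sorted(), which returns a new list, or by sorting in place and then indexing the original list:

```python
list1 = [5, 1, 2, 6]

# Fix 1: sorted() returns a new sorted list, which can be indexed.
order = sorted(list1)
print(order[0])   # 1

# Fix 2: sort in place (list.sort() returns None), then index the
# original list, which is now sorted.
list1.sort()
print(list1[0])   # 1
```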