Backward error in DDPG

My algorithm is DDPG, and my actor network is unable to backpropagate. How should I handle this issue?
import torch
import torch.nn as nn
import torch.nn.init as init

class ActorNet(nn.Module):
    def __init__(self):
        super(ActorNet, self).__init__()
        init_w = 1e-3
        self.input_size = 3
        self.output_size = 1 + 1
        self.fc1 = nn.Linear(self.input_size, HIDDEN_SIZE_1)
        self.fc2 = nn.Linear(HIDDEN_SIZE_1, HIDDEN_SIZE_2)
        self.fc3 = nn.Linear(HIDDEN_SIZE_2, self.output_size)

        init.kaiming_uniform_(self.fc1.weight)
        init.kaiming_uniform_(self.fc2.weight)
        init.kaiming_uniform_(self.fc3.weight)

    def forward(self, x):
        x = self.fc1(x)
        x = torch.relu(x)
        x = self.fc2(x)
        x = torch.relu(x)
        x = self.fc3(x)
        x = torch.relu(x)
        return x

This is my actor-net class, and the update step is below:
state, action, reward, next_state = self.memory.sample(self.batch_size)
state_batch = torch.FloatTensor(np.array(state))
action_batch = torch.FloatTensor(np.array(action))
reward_batch = torch.FloatTensor(reward).unsqueeze(1)
next_state_batch = torch.FloatTensor(np.array(next_state))
state_actor_batch = torch.cat((state_batch, action_batch), 1)
policy_Q = torch.mean(self.critic(state_actor_batch))
actor_loss = -policy_Q
self.actor_optimizer.zero_grad()
torch.nn.utils.clip_grad_norm_(self.actor.parameters(), 1)
actor_loss.backward(retain_graph=True)
self.actor_optimizer.step()
and my optimizer is:
self.actor_optimizer = optim.Adam(self.actor.parameters(), lr=1e-3, weight_decay=1e-5)

I don't know what the error is, but using retain_graph=True is often wrong and causes issues by trying to calculate gradients from stale forward activations. Could you explain why this argument is used?
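For context, retain_graph=True is only needed when you call backward through the same graph more than once. A tiny standalone example (not taken from your code) showing the situation it is meant for:

import torch

x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()

y.backward(retain_graph=True)  # keep the saved activations so the graph survives
y.backward()                   # a second backward through the same graph now works

In a normal DDPG update you should not need it, so the question is where the second backward through the same graph comes from.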


Without using that argument, it raises this error:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

What is important is that the weight and bias gradients stay None after actor_loss.backward(), so my actor-net can't update and the weights and biases do not change after the backward pass. I don't know how to handle this.

I have solved the retain_graph=True error: I was using policy_Q to calculate both the actor loss and the critic loss, which caused it. But I still have the problem that the weight and bias gradients stay None after backward(). I don't know how to handle this; it is the key problem.
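Roughly, the change was to give each loss its own critic forward pass, so each backward() frees only its own graph (a sketch of the idea; target_Q and the critic optimizer are assumed from my critic update and not shown in full):

# Critic loss: one forward pass with its own graph
current_Q = self.critic(state_actor_batch)
critic_loss = nn.functional.mse_loss(current_Q, target_Q)
self.critic_optimizer.zero_grad()
critic_loss.backward()            # only this graph is freed here
self.critic_optimizer.step()

# Actor loss: a second, independent forward pass, so no retain_graph is needed
policy_Q = torch.mean(self.critic(state_actor_batch))
actor_loss = -policy_Q
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()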

I assume you have removed retain_graph=True and fixed the originally raised error?
Trainable parameters will get a valid gradient assigned to their .grad attribute during the backward pass if they were used in the associated computation graph. If some parameters show a None gradient after calling loss.backward(), they weren't used to compute loss and are thus not in the computation graph, or they were detached from the graph.
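A tiny standalone illustration of that behavior (separate from the DDPG code above):

import torch
import torch.nn as nn

used = nn.Linear(3, 1)
unused = nn.Linear(3, 1)

x = torch.randn(4, 3)
loss = used(x).mean()              # only `used` participates in this graph
loss.backward()

print(used.weight.grad is None)    # False: the gradient was populated
print(unused.weight.grad is None)  # True: this layer was never part of the graph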

I am sorry, but my question hasn't been solved yet.
So the weights and biases in ActorNet haven't been used to compute the loss, and that's why they always keep a None gradient?
Thanks so much for your help; I have been confused for a few weeks. I will check my training code and reply later.
Thank you very much.

Yes, as you can see here:

state_batch = torch.FloatTensor(np.array(state))
action_batch = torch.FloatTensor(np.array(action))
state_actor_batch = torch.cat((state_batch, action_batch), 1)
policy_Q = torch.mean(self.critic(state_actor_batch))
actor_loss = -policy_Q
actor_loss.backward()

actor_loss uses the state_/action_batch inputs, which do not come from the actor, and then calls into self.critic. ActorNet is thus never used to calculate actor_loss.
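One common way to put the actor back into the graph is to recompute the action with the current actor instead of using the replayed action_batch (a sketch following the variable names above; any action scaling or clipping is up to your setup):

# Recompute the action with the current actor so its parameters enter the graph
pred_action = self.actor(state_batch)
state_actor_batch = torch.cat((state_batch, pred_action), 1)
actor_loss = -torch.mean(self.critic(state_actor_batch))

self.actor_optimizer.zero_grad()
actor_loss.backward()             # gradients now flow through the critic into the actor
self.actor_optimizer.step()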


Wow, I got it. Thank you so much for your help!