Converting a TensorFlow AdamOptimizer setup to PyTorch

How can I convert the following TensorFlow code to PyTorch:

self.optimizer = tf.train.RMSPropOptimizer(learning_rate=self.learning_rate).minimize(self.loss)
self.neg_log_prob = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.select_y, labels=self.y_PA)
self.select_loss = self.neg_log_prob * self.average_reward
self.optimizer_selector = tf.train.AdamOptimizer(0.001).minimize(self.select_loss)
The self.select_loss in the last minimize call seems to be a tensor, not a scalar?

In PyTorch you don't need to tell the optimizer which loss it should minimize.
You just pass the parameters you would like to update. Have a look at this tutorial to see the usage.
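For concreteness, here is a minimal sketch of how the selector part of the TensorFlow snippet could look in PyTorch. The nn.Linear selector and the dummy tensors are placeholders I'm assuming in place of the original model and data; nn.BCEWithLogitsLoss corresponds to tf.nn.sigmoid_cross_entropy_with_logits in that both take raw logits:

import torch
import torch.nn as nn
import torch.optim as optim

# Dummy stand-ins (assumed) for the objects in the original code
selector = nn.Linear(10, 1)                     # model producing select_y (raw logits)
x = torch.randn(4, 10)                          # batch of 4 inputs
y_PA = torch.randint(0, 2, (4, 1)).float()      # binary targets
average_reward = torch.tensor(0.5)              # reward weight

# The optimizer is bound to parameters, not to a loss
optimizer_selector = optim.Adam(selector.parameters(), lr=0.001)

# BCEWithLogitsLoss takes raw logits, like tf.nn.sigmoid_cross_entropy_with_logits
criterion = nn.BCEWithLogitsLoss(reduction='none')

select_y = selector(x)                          # logits, shape [4, 1]
neg_log_prob = criterion(select_y, y_PA)        # per-sample loss, shape [4, 1]
select_loss = (neg_log_prob * average_reward).mean()  # reduce to a scalar so backward() can be called directly

optimizer_selector.zero_grad()
select_loss.backward()
optimizer_selector.step()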

Hi, thanks for your reply.
I understand that in PyTorch I just need to pass the parameters. However, here the loss is not a scalar! When I try to backprop self.select_loss, PyTorch says it needs to be a scalar.
I want to compute the gradient of self.select_loss with respect to all the model variables.

If any of the tensors are non-scalar (i.e. their data has more than one element) and require gradient, the function additionally requires specifying grad_tensors.

What should be the value of grad_tensors?
The PyTorch code is:

import torch
import torch.optim as optim

# only optimize parameters that require gradients
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=args.lr)
optimizer.zero_grad()

# per-sample binary cross entropy (no reduction), weighted by the reward
cross_entropy_loss = torch.nn.BCELoss(reduction='none')
neg_log_prob = cross_entropy_loss(y_preds, y_PA)
select_loss = neg_log_prob * average_reward   # still a tensor, not a scalar

select_loss.backward(grad_tensors=?)          # what should grad_tensors be?
optimizer.step()

Usually a tensor of ones is a good idea (due to the multiplicative behavior of the gradient computation). Note that Tensor.backward() calls this argument gradient (grad_tensors is the name used by torch.autograd.backward()):

select_loss.backward(gradient=torch.ones_like(select_loss))
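Passing a tensor of ones is equivalent to reducing the loss to a scalar with sum() before calling backward(). Here is a small self-contained sketch demonstrating that, with a dummy nn.Linear model and dummy data assumed in place of the original variables:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)
x = torch.randn(4, 10)
y_PA = torch.randint(0, 2, (4, 1)).float()
average_reward = torch.tensor(0.5)
cross_entropy_loss = nn.BCELoss(reduction='none')

# Option 1: vector loss + explicit gradient of ones
y_preds = torch.sigmoid(model(x))
select_loss = cross_entropy_loss(y_preds, y_PA) * average_reward
select_loss.backward(gradient=torch.ones_like(select_loss))
grads_ones = [p.grad.clone() for p in model.parameters()]

# Option 2: reduce to a scalar first
model.zero_grad()
y_preds = torch.sigmoid(model(x))
select_loss = (cross_entropy_loss(y_preds, y_PA) * average_reward).sum()
select_loss.backward()
grads_sum = [p.grad.clone() for p in model.parameters()]

# Both approaches produce the same parameter gradients
print(all(torch.allclose(a, b) for a, b in zip(grads_ones, grads_sum)))  # True

Using mean() instead of sum() only rescales the gradients by the batch size.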