Can we train different layers of a network with different loss functions?

I am implementing a neural network in PyTorch. I want some layers of my network to be trained with the cross-entropy loss and some layers with the REINFORCE gradient update. In TensorFlow, we can define variable scopes and then compute gradients of the output w.r.t. different variable scopes using different optimizers. Can I implement the network described above in PyTorch?
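One way to sketch this in PyTorch is to give each group of layers its own optimizer and make sure each loss only reaches the layers it should train (here by detaching the shared features before the policy head). The module names (`feature_net`, `class_head`, `policy_head`) and the random reward are made up for illustration, not from the original post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy two-head network: a feature extractor plus a classification head
# trained with cross-entropy, and a stochastic policy head trained
# with REINFORCE. All names and sizes here are hypothetical.
feature_net = nn.Linear(10, 16)
class_head = nn.Linear(16, 3)
policy_head = nn.Linear(16, 4)

# One optimizer per parameter group, analogous to per-scope
# optimizers in TensorFlow.
opt_ce = torch.optim.Adam(
    list(feature_net.parameters()) + list(class_head.parameters()), lr=1e-3)
opt_pg = torch.optim.Adam(policy_head.parameters(), lr=1e-3)

x = torch.randn(8, 10)
labels = torch.randint(0, 3, (8,))

features = feature_net(x)

# Supervised branch: ordinary cross-entropy loss.
ce_loss = F.cross_entropy(class_head(features), labels)

# REINFORCE branch: sample an action and weight its log-probability
# by a reward. features.detach() keeps this loss from backpropagating
# into feature_net, so only policy_head is trained by it.
logits = policy_head(features.detach())
dist = torch.distributions.Categorical(logits=logits)
actions = dist.sample()
reward = torch.randn(8)  # placeholder reward signal
pg_loss = -(dist.log_prob(actions) * reward).mean()

opt_ce.zero_grad()
opt_pg.zero_grad()
(ce_loss + pg_loss).backward()
opt_ce.step()
opt_pg.step()
```

Since each optimizer only holds its own parameter group, each `step()` updates only the layers assigned to that loss.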

Thanks in advance!

On the master branch, we have torch.autograd.differentiate, which provides this. It will be in the next release.

Thanks for your quick reply :slight_smile:

Note: torch.autograd.differentiate has since been renamed to torch.autograd.grad.
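For reference, a minimal sketch of using torch.autograd.grad to compute gradients for just a subset of parameters and apply a manual update (the layers and learning rate are illustrative):

```python
import torch
import torch.nn as nn

# Two hypothetical layers; we only want gradients for layer2.
layer1 = nn.Linear(5, 5)
layer2 = nn.Linear(5, 1)

x = torch.randn(4, 5)
loss = layer2(torch.relu(layer1(x))).pow(2).mean()

# torch.autograd.grad returns gradients of `loss` w.r.t. the given
# tensors only; layer1's .grad fields are left untouched.
params = list(layer2.parameters())
grads = torch.autograd.grad(loss, params)

# Apply a manual SGD-style update with the returned gradients.
with torch.no_grad():
    for p, g in zip(params, grads):
        p -= 0.01 * g
```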