Possible to mask out parts of an autograd Variable?

Hello!

I am wondering if it is possible to take the output of a network and mask out part of it before passing it to the loss function, so that backprop is blocked for some of the neurons?

Thanks!

Hi,

This post should answer your question.
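A common way to do this (a minimal sketch, not necessarily what the linked post describes) is to multiply the output by a 0/1 mask before computing the loss. Since the masked elements contribute nothing to the loss, no gradient flows back through them. The example below uses current PyTorch, where `Tensor` has absorbed the old autograd `Variable`; the shapes and mask pattern are illustrative.

```python
import torch

torch.manual_seed(0)

# Stand-in for a network output: a tensor that requires gradients.
out = torch.randn(2, 4, requires_grad=True)

# Binary mask: 1 keeps an output element, 0 excludes it from the loss.
# Here the last two columns are masked out.
mask = torch.tensor([1., 1., 0., 0.])

# Masked elements contribute zero to the loss, so backprop through
# them produces zero gradient.
loss = ((out * mask) ** 2).sum()
loss.backward()

# out.grad is zero in the masked columns, nonzero elsewhere.
print(out.grad)
```

If you want to stop gradients entirely for a sub-tensor rather than zero them through the loss, `tensor.detach()` on the relevant slice achieves a similar effect.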