Is there something I am missing? How can I access saved tensors? Is there any documentation besides the autograd notes with more examples of autograd Functions? Thanks a lot!
First thing is that you can’t return a Variable from forward: it expects Tensors and will automatically wrap them in Variables and connect them up to the existing graph. You also shouldn’t unpack and re-pack Variables in the middle of a computation, because that breaks the continuity of the history. You need to do something like this:
def my_criterion(input, target, epoch, isLabeled):
    # Instantiate the Function and call the instance like a function
    loss = MyCriterion()(input, target, epoch, isLabeled)
    if (isLabeled.data > 0).all():
        return loss * alpha * epoch  # alpha: a scaling constant defined elsewhere
    return loss
Another thing: you should never call the forward method directly. You should instantiate the Function class and call the instance like you’d call a function. You can use the function I provided above as a more convenient wrapper.
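In other words (reusing the MyCriterion Function from the snippet above):

# Correct: calling the instance goes through __call__, which unwraps the
# Variables, runs forward on Tensors, and records the op in the graph
loss = MyCriterion()(input, target, epoch, isLabeled)

# Wrong: calling forward directly skips all of that bookkeeping, so
# nothing will backprop through the result
# loss = MyCriterion().forward(input, target, epoch, isLabeled)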
It’s also unclear whether what you’re writing needs to be a Function. If you want the autograd library to automatically compute the backwards pass for your operation, and you can represent the operation as a combination of existing autograd-enabled functions (as it looks like you’ve done in forward), you should just define a Python function or a Module.
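For example, a loss built only out of existing autograd ops needs no Function subclass at all. A minimal sketch (the L1-style formula is arbitrary, just to have something concrete):

def l1_loss(input, target):
    # Every op here is autograd-aware, so input and target stay Variables
    # the whole way through and autograd derives the backward pass itself.
    return (input - target).abs().mean()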
If you do need to write a new Function subclass, that means you aren’t able to represent the operation as a combination of existing functions with known derivatives, and you have to implement the backward pass yourself (you can’t just call .backward()). The computations inside the forward and backward methods of a Function subclass take place on Tensor objects, not Variables.
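Here’s the classic exp example written in the style of Function this thread assumes (a sketch; Exp is just a stand-in for your own op). Note how saved tensors are stashed with save_for_backward in forward and read back from self.saved_tensors in backward, which also answers the question above:

import torch
from torch.autograd import Function, Variable

class Exp(Function):
    def forward(self, input):
        # forward receives and returns plain Tensors
        output = input.exp()
        self.save_for_backward(output)
        return output

    def backward(self, grad_output):
        # backward also works on Tensors; return one gradient per input
        output, = self.saved_tensors
        return grad_output * output  # d/dx exp(x) = exp(x)

x = Variable(torch.randn(5), requires_grad=True)
y = Exp()(x)  # instantiate and call; never Exp().forward(x)
y.backward(torch.ones(5))  # x.grad now holds exp(x)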
A quick follow-up question: since Module doesn’t have an explicit .backward() method, how exactly do I backprop on a loss function that is a Module? Is it enough if I just use .train() instead?
All of the operators inside a Module's forward function have a backward defined, because the inputs are Variables. So the backward for the whole Module is automatically defined by autograd.
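Concretely, you never call backward on the Module itself; you call .backward() on the Variable it returns. (.train() is unrelated here: it only switches layers like dropout or batchnorm into training mode.) A minimal sketch, with an arbitrary MSE-style loss as the placeholder:

import torch
import torch.nn as nn
from torch.autograd import Variable

class MyLoss(nn.Module):
    def forward(self, input, target):
        # composed only of autograd ops, so backward comes for free
        return (input - target).pow(2).mean()

input = Variable(torch.randn(3, 5), requires_grad=True)
target = Variable(torch.randn(3, 5))

loss = MyLoss()(input, target)  # loss is a 1-element Variable
loss.backward()                 # gradients accumulate into input.grad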