"Concretize" the result of calling grad()

Hello!

I was wondering if it would be possible to “concretize” the result of the grad() call. I will explain.

Say we have some function sin(x^2) implemented in PyTorch. When I call grad(), it builds a graph that ultimately represents the expression cos(x^2) · 2x. Is it possible to get PyTorch to return this expression to me ‘undecorated’, as opposed to a graph of backward() → backward() calls? E.g. instead return a graph of plain forward ops, something like x → [mul by x] → [cos] → [mul by 2x], etc. This way I can pass it around and differentiate it again without worrying about how it was generated (using grad).
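For concreteness, here is roughly the kind of thing I am doing (just a sketch, the input value is arbitrary):

```python
import torch

x = torch.tensor(1.5, requires_grad=True)
y = torch.sin(x ** 2)

# create_graph=True so the returned gradient is itself differentiable
(dy_dx,) = torch.autograd.grad(y, x, create_graph=True)

# Mathematically dy_dx is cos(x^2) * 2x, but what I actually get back is a
# tensor whose grad_fn is a chain of *Backward nodes, not a plain forward
# graph of cos/mul/pow ops that I could inspect or pass around on its own.
print(dy_dx)          # the value of cos(x^2) * 2x at x = 1.5
print(dy_dx.grad_fn)  # some *Backward node
```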

What problems do you run into when differentiating it again? The graph will always contain *Backward calls; those are the functions that are actually run when you differentiate it. If you wrote out cos(x^2) · 2x yourself and differentiated that, you should expect to get the same kind of graph.
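For example (rough sketch, value arbitrary), taking the gradient of the autograd-produced gradient and taking the gradient of a hand-written cos(x^2) · 2x should give matching numbers, and both go through *Backward nodes:

```python
import torch

x = torch.tensor(1.5, requires_grad=True)

# First derivative obtained through autograd, kept differentiable
y = torch.sin(x ** 2)
(dy_dx,) = torch.autograd.grad(y, x, create_graph=True)
(d2y_dx2,) = torch.autograd.grad(dy_dx, x)

# The same first derivative written out by hand, then differentiated
g = torch.cos(x ** 2) * 2 * x
(dg_dx,) = torch.autograd.grad(g, x)

# Both should print 2*cos(x^2) - 4*x^2*sin(x^2) evaluated at x = 1.5
print(d2y_dx2, dg_dx)
```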

I get an error when calling model(*unpack_inputs()).
It says: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
even though the original input has .requires_grad_() set.
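Could this be the same failure mode as in the toy snippet below, where the graph between the input and the output gets cut (here deliberately with torch.no_grad(); I am guessing a detach() or a numpy round-trip somewhere inside unpack_inputs() or the model would behave the same way)?

```python
import torch

x = torch.tensor(1.5).requires_grad_()

# Anything computed inside torch.no_grad() (or after detach()/.data) is not
# recorded in the graph, so the result has no grad_fn even though x itself
# requires grad.
with torch.no_grad():
    y = torch.sin(x ** 2)

y.backward()
# RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```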