Preallocate memory for function outputs

Most basic PyTorch operations accept an optional `out` argument, which is exactly a preallocated-memory mode: the result is written into the tensor you pass instead of a freshly allocated one. Unfortunately, an error is thrown when any tensor argument requires gradients, since `out=` does not support autograd. One workaround is to wrap the operation in a custom `autograd.Function`, but that also requires writing `backward()` manually.
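A minimal sketch of both points (tensor names are illustrative): `out=` reuses a preallocated buffer, but raises a `RuntimeError` as soon as an input has `requires_grad=True`:

```python
import torch

a = torch.randn(3)
b = torch.randn(3)

# Preallocate a buffer once and reuse it; torch.add writes into it in place.
out = torch.empty(3)
torch.add(a, b, out=out)  # no new allocation for the result

# With a gradient-tracking input, the same call fails.
a_grad = torch.randn(3, requires_grad=True)
try:
    torch.add(a_grad, b, out=out)
except RuntimeError as e:
    print("out= with requires_grad input failed:", e)
```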
