What happens when I write `auto a = tensor`?

In ConstantPadNd.cpp, I found this code:

    auto output = at::empty(new_shape, self.options());
    output.fill_(value);

    auto c_output = output;
    for (int i = l_diff; i < l_inp; i++) {
        auto pad_idx = 2 * (l_inp - i - 1);
        if (pad[pad_idx] > 0) {
            c_output = c_output.narrow(i, pad[pad_idx], c_output.size(i) - pad[pad_idx]);
        }
        if (pad[pad_idx + 1] > 0) {
            c_output = c_output.narrow(i, 0, c_output.size(i) - pad[pad_idx + 1]);
        }
    }
    c_output.copy_(c_input);
    return output;
}

Does `c_output` share its TensorImpl with `output`?

@tgsmdww
`auto output = at::empty(…)` -> `output` is a newly allocated tensor.
`auto c_output = output` -> a shallow copy: `c_output` and `output` refer to the same TensorImpl, and therefore the same underlying storage.
`c_output = c_output.narrow(…)` -> `narrow()` returns a view, so `c_output` is rebound to a new TensorImpl, but its storage is not changed; it is still the storage owned by `output`.
`c_output.copy_(c_input)` -> copies `c_input` into that shared storage, in the region selected by the narrows.

`return output` -> because `output` shares its storage with `c_output`, the data copied in the previous step is visible through `output`.
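To see this aliasing behaviour outside of ConstantPadNd.cpp, here is a minimal standalone sketch using the public libtorch C++ API instead of the internal `at::` calls; the shape, dim, and values are made up purely for illustration:

    #include <torch/torch.h>
    #include <iostream>

    int main() {
        // Analogue of `auto output = at::empty(...); output.fill_(value);`
        auto output = torch::zeros({6});

        // Shallow copy: c_output refers to the same TensorImpl (and storage) as output.
        auto c_output = output;

        // narrow() returns a view: c_output is rebound to a new TensorImpl,
        // but that view still uses output's storage.
        c_output = c_output.narrow(/*dim=*/0, /*start=*/2, /*length=*/3);

        // Writing through the view lands in the shared storage...
        c_output.copy_(torch::ones({3}));

        // ...so the change is visible through output: [0, 0, 1, 1, 1, 0].
        std::cout << output << std::endl;

        // The base storage pointers are identical even though the TensorImpls differ.
        std::cout << std::boolalpha
                  << (output.storage().data_ptr().get()
                      == c_output.storage().data_ptr().get())
                  << std::endl;  // prints: true
        return 0;
    }

Build it like any libtorch program (e.g. a small CMake project linking against libtorch); the printed tensor shows the padded "interior" being overwritten through the view.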


Thank you :grin:!!!