Is it safe to create a new GPU tensor during CUDA graph capture?

I learned that when capturing a CUDA graph, if a new tensor is created, the graph only records its memory address, and during replay it operates on that same address. Is there a risk of accessing an invalid address (if it wasn't properly allocated), or of overwriting memory belonging to other tensors? Or does PyTorch's internal memory pool (the caching allocator) guarantee that these addresses remain valid and reserved?

import torch

static_tensor = torch.arange(5, device='cuda:0')
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_tensor.copy_(torch.arange(1, 1+5, device='cuda:0')) # the arange() result is a new tensor allocated during capture
g.replay()
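One quick way to see the fixed-address behavior is to clobber the output between capture and replay. A minimal sketch, assuming a CUDA device is available (on a CPU-only machine the block is a no-op):

```python
import torch

# Replay re-executes the recorded kernels on the captured addresses, so
# overwriting static_tensor between capture and replay is undone by replay().
result = None  # stays None on machines without a CUDA device
if torch.cuda.is_available():
    static_tensor = torch.arange(5, device='cuda:0')
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        # the arange() result is a temporary tensor allocated during capture
        static_tensor.copy_(torch.arange(1, 6, device='cuda:0'))
    static_tensor.zero_()   # clobber the output outside the graph
    g.replay()              # re-runs the captured copy at the same address
    torch.cuda.synchronize()
    result = static_tensor.tolist()
```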

According to section 4.2 (CUDA Graphs) of the CUDA C++ Programming Guide, a memory allocation performed during capture is recorded as a memory node in the graph:

Graph allocations have fixed addresses over the life of a graph including repeated instantiations and launches. This allows the memory to be directly referenced by other operations within the graph without the need of a graph update, even when CUDA changes the backing physical memory. Within a graph, allocations whose graph ordered lifetimes do not overlap may use the same underlying physical memory.

So I think it's safe to create a new GPU tensor during capture: the address comes from the graph's private memory pool, and its lifetime is managed by CUDA and PyTorch's caching allocator.
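To check that PyTorch keeps such an address valid across replays, one can hold a reference to a tensor created inside capture and replay after mutating the input in place. A sketch under the same CUDA-availability assumption:

```python
import torch

# A tensor created during capture is served from the graph's private memory
# pool; holding a Python reference keeps its address reserved, and each
# replay writes the recomputed result to that same address.
out = None  # stays None on machines without a CUDA device
if torch.cuda.is_available():
    x = torch.ones(5, device='cuda:0')
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        y = x * 2           # y is allocated from the graph's pool
    x.fill_(3.0)            # mutate the input in place (same address)
    g.replay()              # recomputes x * 2 into y's captured address
    torch.cuda.synchronize()
    out = y.tolist()        # [6.0, 6.0, 6.0, 6.0, 6.0]
```

Note the usual static input/output pattern: because replay reads and writes the captured addresses, updating `x` in place and keeping `y` alive is what makes the replayed result meaningful.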