Synchronization behavior of torch.cuda.memory_stats and related functions

Hi, I have a question about memory_stats and related functions, especially memory_allocated and max_memory_allocated. There seems to be no documentation (or authoritative information on the forum) about whether calling them requires or triggers a device synchronization. If they don't synchronize, are their return values derived purely from host-side allocator bookkeeping plus information about enqueued (but possibly not yet executed) CUDA operations? And can they actually give reliable numbers that way, i.e. values that won't change once those operations are actually run?
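To make the question concrete, here is a minimal sketch of the scenario I mean: query memory_allocated immediately after enqueuing a kernel, before any explicit synchronization. (This is just an illustration of the timing in question, not a claim about what the allocator actually guarantees; the shapes and the matmul are arbitrary.)

```python
import torch

have_cuda = torch.cuda.is_available()
allocated_after = None

if have_cuda:
    torch.cuda.reset_peak_memory_stats()

    # Allocation request goes through the caching allocator on the host.
    x = torch.empty(1024, 1024, device="cuda")

    # Kernel is enqueued here; it may not have executed yet when we
    # query the stats on the next line.
    y = x @ x

    # The question: does this call synchronize, and does its value
    # already account for y's buffer even if the kernel hasn't run?
    allocated_after = torch.cuda.memory_allocated()
    peak = torch.cuda.max_memory_allocated()
    print("allocated:", allocated_after, "peak:", peak)
else:
    print("no CUDA device available")
```

If these counters are purely host-side, I would expect allocated_after to already include both buffers regardless of whether the matmul kernel has finished.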