PyTorch provides functions with the `_foreach_` prefix, such as `torch._foreach_exp` and `torch._foreach_add`, that take one or more lists of tensors. They apply a counterpart native function, such as `torch.exp` or `torch.add`, to each element of the input list(s). When certain conditions are met (for example, all tensors live on the same device and share the same dtype), these functions can issue far fewer CUDA kernel launches than iterating over the input lists and calling the per-tensor torch function on each element.
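A minimal sketch of how these functions line up with their per-tensor counterparts, using CPU tensors for illustration (the kernel-launch savings apply when the tensors are on CUDA):

```python
import torch

# A list of small tensors; the foreach API also accepts CUDA tensors,
# where it can batch the work into fewer kernel launches.
tensors = [torch.randn(3) for _ in range(4)]

# Elementwise exp applied across the whole list in one call.
out = torch._foreach_exp(tensors)

# Equivalent loop over the per-tensor counterpart, torch.exp.
expected = [torch.exp(t) for t in tensors]
for a, b in zip(out, expected):
    assert torch.allclose(a, b)

# Binary variant: add two lists of tensors elementwise.
others = [torch.ones(3) for _ in range(4)]
sums = torch._foreach_add(tensors, others)
for s, t in zip(sums, tensors):
    assert torch.allclose(s, t + 1.0)
```

Note that these are underscore-prefixed (internal) APIs; PyTorch itself uses them to implement multi-tensor optimizers such as `foreach`-enabled `torch.optim` variants.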