What does `foreach` do in AdamW?

Hello! AdamW has a `foreach` parameter, whose documentation states:

foreach (bool, optional) – whether foreach implementation of optimizer is used (default: None)

I tried searching for what the “foreach implementation” is, but couldn’t find anything. Could anyone explain what it is?
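
For reference, this is the kind of call I mean (a minimal sketch; the model and learning rate are just placeholders):

```python
import torch

model = torch.nn.Linear(8, 8)  # placeholder model

# foreach=True requests the "foreach" implementation;
# foreach=None (the default) lets PyTorch decide.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, foreach=True)
```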

Thank you in advance!

There are some functions with the prefix `_foreach_`, such as `torch._foreach_exp` and `torch._foreach_add`, that take one or more lists of tensors. They apply a counterpart native function, such as `torch.exp` or `torch.add`, to each element of the input list(s). If certain conditions are met, such as the tensors living on the same device and having the same dtype, we can expect far fewer CUDA kernel calls than simply iterating over the input lists and calling the torch functions one by one.
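
A minimal sketch of the difference (shapes and list lengths are arbitrary; on CPU there is no kernel-launch saving, but the results should match):

```python
import torch

# Two lists of same-device, same-dtype tensors,
# e.g. parameters and their gradients.
params = [torch.randn(3, 3) for _ in range(4)]
grads = [torch.randn(3, 3) for _ in range(4)]

# Naive approach: one torch.add call (and, on CUDA,
# one kernel launch) per tensor in the list.
out_loop = [torch.add(p, g) for p, g in zip(params, grads)]

# foreach approach: the whole list is handled in far fewer
# kernel launches when the tensors share a device and dtype.
out_foreach = torch._foreach_add(params, grads)

assert all(torch.allclose(a, b) for a, b in zip(out_loop, out_foreach))
```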

I see! Thank you for the clarification.

Hey, thank you for your answer. Could you give a source where `torch._foreach_add` is explained in greater detail?

  • Is it an in-place operation on one of the two arguments?
  • Is there any way to replace this operation with plain PyTorch functions, looping directly over both lists (see the sketch below)?
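
To make the second question concrete, here is what I imagine the loop replacement would look like (my assumption; I am also guessing that the trailing-underscore variant `torch._foreach_add_` would be the in-place counterpart):

```python
import torch

xs = [torch.randn(2, 2) for _ in range(3)]
ys = [torch.randn(2, 2) for _ in range(3)]

# Out-of-place: does this loop match torch._foreach_add(xs, ys)?
out = [torch.add(x, y) for x, y in zip(xs, ys)]

# In-place: and does this match torch._foreach_add_(xs, ys),
# which (I assume) would modify the tensors in xs?
for x, y in zip(xs, ys):
    x.add_(y)
```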