Should `__init__` arguments be stored as persistent buffers?


Modules are often "parameterized" by simple constant, non-trainable values that define their behavior (in addition to trainable parameters). Should such values be part of the persistent state?

class MyModule(nn.Module):
    def __init__(self, important_value: int, ...):
        super().__init__()
        # (A) self.important_value = important_value
        # VS
        # (B) self.register_buffer('important_value', torch.tensor(important_value))
        # VS
        # (C) self.register_buffer('important_value', torch.tensor(important_value), persistent=False)

From the perspective of my question, (A) and (C) do not differ semantically. However, (B) is different, since it makes important_value part of the persistent state. That means the value provided by the user can be overwritten after loading a state dict. This can be both good (it prevents users from providing values inconsistent with the loaded state) and bad (the user may genuinely want to supply a custom value).
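To make the contrast concrete, here is a minimal sketch (class names are hypothetical) showing how (B) and (C) interact with state_dict / load_state_dict:

```python
import torch
import torch.nn as nn

class PersistentModule(nn.Module):
    def __init__(self, important_value: int):
        super().__init__()
        # (B): persistent buffer -- included in state_dict
        self.register_buffer('important_value', torch.tensor(important_value))

class NonPersistentModule(nn.Module):
    def __init__(self, important_value: int):
        super().__init__()
        # (C): non-persistent buffer -- excluded from state_dict
        self.register_buffer('important_value', torch.tensor(important_value),
                             persistent=False)

saved = PersistentModule(1).state_dict()
print(saved)  # contains 'important_value'

# (B): loading overwrites whatever the user passed to __init__
m_b = PersistentModule(2)
m_b.load_state_dict(saved)
print(m_b.important_value)  # tensor(1) -- the checkpoint value wins

# (C): the buffer never enters the state dict, so the user's value survives
m_c = NonPersistentModule(2)
m_c.load_state_dict(NonPersistentModule(1).state_dict())
print(m_c.important_value)  # tensor(2)
```

Note that (C) still differs from a plain attribute (A) in other respects: the buffer moves with `module.to(device)` and appears in `module.buffers()`, it just isn't serialized.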

Is there a "preferred" or "expected" approach, or should I choose whichever I like, the only requirement being to document the behavior properly?